1 Introduction

There has been considerable research effort devoted to the stability analysis of nonlinear systems in the presence of exogenous inputs over the last few decades. In particular, Sontag [22] introduced the notion of input-to-state stability (ISS), which is a generalization of \(H_\infty \) stability to nonlinear systems. Many applications of ISS to the analysis and design of feedback systems have been reported [25]. A variant of the ISS notion was introduced in [24], extending \(H_2\) stability to nonlinear systems. This generalization, called integral input-to-state stability (iISS), was studied for continuous-time systems in [4], followed by an investigation of iISS for discrete-time systems in [1]. As far as stability analysis with respect to compact sets is concerned, it has been established that iISS is a more general concept than ISS: every ISS system is also iISS, while the converse is not necessarily true [24].

There is a wide variety of dynamical systems that cannot be described by differential or difference equations alone. This gives rise to the so-called hybrid systems, which combine continuous-time (flow) and discrete-time (jump) behaviors. Significant contributions to the modeling of hybrid systems have been made in [10]. In particular, the framework developed in [10] not only models a wide range of hybrid systems, but also allows the study of stability and robustness of such systems.

This paper investigates iISS for hybrid systems modeled by the framework of [10]. Although the notion of iISS is well understood for switched and impulsive systems (cf. [15] and [11] for more details), to the best of our knowledge, no generalization of iISS applicable to such a wide variety of hybrid systems has been developed yet. Toward this end, we provide a Lyapunov characterization of iISS, unifying and generalizing the existing theory for purely continuous-time and purely discrete-time systems. Furthermore, we relate iISS to dissipativity and detectability notions. We also establish robustness of the iISS property to vanishing perturbations. We finally illustrate the effectiveness of our results by applying them to the determination of a maximum allowable sampling period (MASP) guaranteeing iISS for sampled-data systems with an emulated controller. More precisely, we show that if a continuous-time controller renders a closed-loop system iISS, then the iISS property of the closed-loop control system is preserved under an emulation-based digital implementation whenever the sampling period is less than the corresponding MASP.

The rest of this paper is organized as follows: First we introduce our notation in Sect. 2. In Sect. 3, a description of hybrid systems, their solutions and the relevant stability notions is given. The main results are presented in Sect. 4. Section 5 establishes the iISS property of sampled-data control systems. Section 6 provides the concluding remarks.

2 Notation

In this paper, \(\mathbb R_{\ge 0}\) (\(\mathbb R_{> 0}\)) and \({\mathbb {Z}}_{\ge 0}\) (\({\mathbb {Z}}_{> 0}\)) denote the sets of nonnegative (positive) real numbers and nonnegative (positive) integers, respectively. \(\mathbb {B}\) is the open unit ball in \(\mathbb R^{n}\). The standard Euclidean norm is denoted by \(\left| \cdot \right| \). Given a set \({\mathcal {A}}\subset \mathbb R^{n}\), \(\overline{{\mathcal {A}}}\) denotes its closure. \(\left| x\right| _{\mathcal {A}}\) denotes \(\inf \nolimits _{y \in {\mathcal {A}}}\left| x-y\right| \) for a closed set \({\mathcal {A}}\subset \mathbb R^{n}\) and any point \(x \in \mathbb R^{n}\). Given an open set \(\mathcal {X} \subset \mathbb R^{n}\) containing a compact set \({\mathcal {A}}\), a function \(\omega :\mathcal {X} \rightarrow \mathbb R_{\ge 0}\) is a proper indicator for \({\mathcal {A}}\) on \(\mathcal {X}\) if \(\omega \) is continuous, \(\omega (x)=0\) if and only if \(x\in {\mathcal {A}}\), and \(\omega (x_i) \rightarrow +\infty \) when either \(x_i\) tends to the boundary of \(\mathcal {X}\) or \(\left| x_i\right| \rightarrow +\infty \). The identity function is denoted by \({{\mathrm{id}}}\). Composition of functions from \(\mathbb R\) to \(\mathbb R\) is denoted by the symbol \(\circ \).

A function \(\alpha :\mathbb R_{\ge 0}\rightarrow \mathbb R_{\ge 0}\) is said to be positive definite (\(\alpha \in \mathcal {PD}\)) if it is continuous, zero at zero and positive elsewhere. A positive definite function \(\alpha :\mathbb R_{\ge 0}\rightarrow \mathbb R_{\ge 0}\) is of class-\(\mathcal {K}\) (\(\alpha \in \mathcal {K}\)) if it is strictly increasing. It is of class-\(\mathcal {K}_\infty \) (\(\alpha \in \mathcal {K}_\infty \)) if \(\alpha \in \mathcal {K}\) and also \(\alpha (s) \rightarrow +\infty \) as \(s \rightarrow +\infty \). A continuous function \(\gamma \) is of class-\(\mathcal {L}\) (\(\gamma \in \mathcal {L}\)) if it is nonincreasing and \(\lim _{s \rightarrow +\infty } \gamma (s) = 0\). A function \(\beta :\mathbb R_{\ge 0}\times \mathbb R_{\ge 0}\rightarrow \mathbb R_{\ge 0}\) is of class-\(\mathcal {KL}\) (\(\beta \in \mathcal {KL}\)), if for each \(s \ge 0\), \(\beta (\cdot ,s) \in \mathcal {K}\), and for each \(r \ge 0\), \(\beta (r,\cdot ) \in \mathcal {L}\). A function \(\beta :\mathbb R_{\ge 0}\times \mathbb R_{\ge 0}\times \mathbb R_{\ge 0}\rightarrow \mathbb R_{\ge 0}\) is of class-\(\mathcal {KLL}\) (\(\beta \in \mathcal {KLL}\)), if for each \(s \ge 0\), \(\beta (\cdot ,s,\cdot ) \in \mathcal {KL}\) and \(\beta (\cdot ,\cdot ,s) \in \mathcal {KL}\). The interested reader is referred to [12] for more details about comparison functions.

3 Hybrid systems and stability definitions

Consider the hybrid system with state \(x \in \mathcal {X}\) and input \(u \in \mathcal {U} \subset \mathbb R^d\) given by

$$\begin{aligned} \mathcal {H}:= \left\{ \begin{array}{lcclr} \dot{x} &{}=&{} f(x,u) &{} \quad (x,u) \in \mathcal {C}\\ x^+ &{}=&{} g(x,u) &{} \quad (x,u) \in \mathcal {D}\end{array} \right. . \end{aligned}$$
(1)

The flow and jump sets are designated by \(\mathcal {C}\) and \(\mathcal {D}\), respectively. We denote the system (1) by a 6-tuple \(\mathcal {H}=(f,g,\mathcal {C},\mathcal {D},\mathcal {X},\mathcal {U})\). Basic regularity conditions borrowed from [6] are imposed on the system \(\mathcal {H}\) as follows

  1. (A1)

    \(\mathcal {X}\subset \mathbb R^{n}\) is open, \(\mathcal {U} \subset \mathbb R^d\) is closed, and \(\mathcal {C}\) and \(\mathcal {D}\) are relatively closed sets in \(\mathcal {X}\times \mathcal {U}\).

  2. (A2)

    \(f :\mathcal {C}\rightarrow \mathbb R^{n}\) and \(g :\mathcal {D}\rightarrow \mathcal {X}\) are continuous.

  3. (A3)

    For each \(x \in \mathcal {X}\) and each \(\epsilon \ge 0\) the set \(\{ f(x,u) \mid u \in \mathcal {U} \cap \epsilon \overline{\mathbb {B}} \}\) is convex.

Here, we refer to assumptions (A1)–(A3) as the Standing Assumptions. We note that the Standing Assumptions guarantee the well-posedness of \(\mathcal {H}\) (cf. [10, Chapter 6] for more details). Throughout the paper we suppose that the Standing Assumptions hold unless otherwise stated.

The following definitions are needed in the sequel. A subset \(E \subset \mathbb R_{\ge 0}\times {\mathbb {Z}}_{\ge 0}\) is called a compact hybrid time domain if \(E=\bigcup _{j=0}^{J} ([t_j , t_{j+1}],j)\) for some finite sequence of real numbers \(0 = t_0 \le \cdots \le t_{J+1}\). We say E is a hybrid time domain if, for each pair \((T,J) \in E\), the set \(E \cap ([0,T] \times \{ 0,1, \dots ,J \})\) is a compact hybrid time domain. For each hybrid time domain E, there is a natural ordering of points: given \((t,j), (t^\prime ,j^\prime ) \in E\), \((t,j) \preceq (t^\prime ,j^\prime )\) if \(t+j \le t^\prime + j^\prime \), and \((t,j) \prec (t^\prime ,j^\prime )\) if \(t+j < t^{\prime }+j^{\prime }\). Given a hybrid time domain E, we define

$$\begin{aligned}&{{\sup }_t} E := \sup \{ t \in \mathbb R_{\ge 0}:\exists \, j \in {\mathbb {Z}}_{\ge 0}\text { such that } (t,j) \in E \} , \\&{{\sup }_j} E := \sup \{ j \in {\mathbb {Z}}_{\ge 0}:\exists \, t \in \mathbb R_{\ge 0}\text { such that } (t,j) \in E \} , \\&\mathrm {length} (E) := {{\sup }_t} E + {{\sup }_j} E . \end{aligned}$$

The operations \({{\sup }_t}\) and \({{\sup }_j}\) on a hybrid time domain E return the supremum of the \(\mathbb R\) and \({\mathbb {Z}}\) coordinates, respectively, of points in E. A function defined on a hybrid time domain is called a hybrid signal. Given a hybrid signal \(x : \mathrm {dom}x \rightarrow \mathcal {X}\), for any \(s \in \left[ 0, {{\sup }_t} \mathrm {dom}\,x \right] \backslash \{+\infty \}\), i(s) denotes the maximum index i such that \((s,i) \in \mathrm {dom}x\), that is, \(i(s) := \max \{ i \in {\mathbb {Z}}_{\ge 0}:(s,i) \in \mathrm {dom}x \}\). A hybrid signal \(x :\mathrm {dom}\,x \rightarrow \mathcal {X}\) is a hybrid arc if for each \(j \in {\mathbb {Z}}_{\ge 0}\), the function \(t \mapsto x(t,j)\) is locally absolutely continuous on the interval \(I^{j} := \{ t :(t,j) \in \mathrm {dom}x \}\). A hybrid signal \(u :\mathrm {dom} \, u \rightarrow \mathcal {U}\) is a hybrid input if for each \(j \in {\mathbb {Z}}_{\ge 0}\), \(u(\cdot ,j)\) is Lebesgue measurable and locally essentially bounded.
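
To fix ideas, the bookkeeping above can be made concrete computationally. The following is a minimal Python sketch (not part of the formal development; the class and variable names are illustrative only) representing a compact hybrid time domain by its jump times and evaluating \({{\sup }_t} E\), \({{\sup }_j} E\), \(\mathrm {length}(E)\) and \(i(s)\) as defined above.

```python
class HybridTimeDomain:
    """A compact hybrid time domain E = U_{j=0}^{J} ([t_j, t_{j+1}], j),
    stored via its jump times [t_0, t_1, ..., t_{J+1}] with t_0 = 0."""

    def __init__(self, jump_times):
        assert jump_times[0] == 0.0
        assert all(a <= b for a, b in zip(jump_times, jump_times[1:]))
        self.t = list(jump_times)

    def sup_t(self):
        return self.t[-1]                      # sup_t E

    def sup_j(self):
        return len(self.t) - 2                 # sup_j E = J

    def length(self):
        return self.sup_t() + self.sup_j()     # length(E)

    def i(self, s):
        # i(s) = max{ i : (s, i) in E }, i.e. the largest j with t_j <= s <= t_{j+1}
        assert 0.0 <= s <= self.sup_t()
        return max(j for j in range(self.sup_j() + 1)
                   if self.t[j] <= s <= self.t[j + 1])

E = HybridTimeDomain([0.0, 1.0, 1.0, 2.5])     # two jumps, both at ordinary time t = 1
print(E.sup_t(), E.sup_j(), E.length(), E.i(1.0))   # 2.5 2 4.5 2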

Let a hybrid signal \(v :\mathrm {dom} \, v \rightarrow \mathbb R^{n}\) be given. Let \((0,0), (t,j) \in \mathrm {dom} \, v\) be such that \((0,0) \prec (t,j)\), and let \({\varGamma }(v)\) denote the set of \((t^{\prime },j^{\prime }) \in \mathrm {dom} \, v\) such that \((t^{\prime },j^{\prime }+1) \in \mathrm {dom} \, v\).

Let \(\gamma _1,\gamma _2 \in \mathcal {K}\) and let \(u :\mathrm {dom}\, u \rightarrow \mathcal {U}\) be a hybrid input such that for all \((t,j) \in \mathrm {dom}\,u\) the following hold

$$\begin{aligned}&\left\| u_{(t,j)}\right\| _{\gamma _1,\gamma _2} := \int _0^t \gamma _1 (\left| u(s,i(s))\right| ) \mathrm {d} s + { \sum _{{\begin{array}{c}(t^\prime ,j^\prime ) \in {\varGamma }(u),\\ (0,0) \preceq (t^{\prime },j^{\prime }) \prec (t,j)\end{array}}} \gamma _2 (\left| u(t^\prime ,j^\prime )\right| ) } < + \infty .&\end{aligned}$$

We denote the set of all such hybrid inputs by \(\mathcal {L}_{\gamma _1,\gamma _2}\). Also, if \(\left\| u_{(t,j)}\right\| _{\gamma _1,\gamma _2} < r\) for some \(r > 0\) and all \((t,j) \in \mathrm {dom}\,u\), we write \(u \in \mathcal {L}_{\gamma _1,\gamma _2} (r)\). Let a hybrid input \(u :\mathrm {dom}\,u \rightarrow \mathcal {U}\) be given. For each \(T \in \left[ 0, \mathrm {length} (\mathrm {dom}\,u) \right] \backslash \{+\infty \}\), the hybrid input \(u_T :\mathrm {dom}\,u \rightarrow \mathcal {U}\) is defined by

$$\begin{aligned} u_T (t,j) = \left\{ \begin{array}{lr} u(t,j) &{} \qquad t+j \le T \\ 0 &{} \qquad t+j > T \end{array} \right. \end{aligned}$$

and is called the T-truncation of u. The set \(\mathcal {L}_{\gamma _1,\gamma _2}^e\) \(\left( \mathcal {L}_{\gamma _1,\gamma _2}^e (r)\right) \) consists of all hybrid inputs \(u(\cdot ,\cdot )\) with the property that for all \(T \in [0,\infty )\), \(u_T \in \mathcal {L}_{\gamma _1,\gamma _2} \left( u_T \in \mathcal {L}_{\gamma _1,\gamma _2} (r) \right) \), and is called the extended \(\mathcal {L}_{\gamma _1,\gamma _2}\)-space.
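
The quantity \(\left\| u_{(t,j)}\right\| _{\gamma _1,\gamma _2}\) combines an integral of \(\gamma _1(\left| u\right| )\) along flows with a sum of \(\gamma _2(\left| u\right| )\) over jumps. As a numerical illustration only, the short Python sketch below approximates this norm for a sampled hybrid input; the choices \(\gamma _1(s)=\gamma _2(s)=s^2\) and the exponentially decaying input are assumptions made for the example, not data from the paper.

```python
import numpy as np

def hybrid_input_norm(flow_samples, jump_values, gamma1, gamma2):
    """Approximate ||u_{(t,j)}||_{gamma1,gamma2}.

    flow_samples : list of (ts, us) pairs, one per flow interval, where ts is an
                   increasing array of times and us the samples of |u(s, i(s))|.
    jump_values  : the values |u(t', j')| at the jump instants (t', j') in Gamma(u)
                   preceding (t, j).
    """
    flow_part = sum(np.trapz(gamma1(np.abs(us)), ts) for ts, us in flow_samples)
    jump_part = sum(gamma2(abs(v)) for v in jump_values)
    return flow_part + jump_part

# Illustrative data: u(s, i(s)) = exp(-s) on [0, 1] and on [1, 3], one jump at t = 1.
gamma = lambda s: s ** 2                      # gamma1 = gamma2 = s^2 (assumed)
ts1 = np.linspace(0.0, 1.0, 101); ts2 = np.linspace(1.0, 3.0, 201)
flow = [(ts1, np.exp(-ts1)), (ts2, np.exp(-ts2))]
jumps = [np.exp(-1.0)]                        # |u(1, 0)| at the single jump
print(hybrid_input_norm(flow, jumps, gamma, gamma))
```

Finiteness of this quantity for all \((t,j)\) is exactly the membership condition \(u \in \mathcal {L}_{\gamma _1,\gamma _2}\) stated above.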

A hybrid arc \(x :\mathrm {dom}\,x \rightarrow \mathcal {X}\) and a hybrid input \(u :\mathrm {dom}\,u \rightarrow \mathcal {U}\) form a solution pair \((x,u)\) to \(\mathcal {H}\) if \(\mathrm {dom}\,x = \mathrm {dom}\,u\), \((x(0,0),u(0,0)) \in \mathcal {C}\cup \mathcal {D}\), and

  • for each \(j \in {\mathbb {Z}}_{\ge 0}\), \((x(t,j),u(t,j)) \in \mathcal {C}\) and \(\dot{x}=f(x(t,j),u(t,j))\) for almost all \(t \in I^j\) where \(I^{j}\) has nonempty interior;

  • for all \((t,j) \in {\varGamma }(x)\), \((x(t,j),u(t,j)) \in \mathcal {D}\) and \(x(t,j+1)=g(x(t,j),u(t,j))\).

A solution pair \((x,u)\) to \(\mathcal {H}\) is maximal if it cannot be extended, and it is complete if \(\mathrm {dom}\,x\) is unbounded. A maximal solution to \(\mathcal {H}\) with the initial condition \(\xi := x(0,0)\) and the input u is denoted by \(x(\cdot ,\cdot ,\xi ,u)\). The set of all maximal solution pairs \((x,u)\) to \(\mathcal {H}\) with \(\xi := x(0,0) \in \mathcal {X}\) is designated by \(\varrho ^u (\xi )\).
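
Operationally, a solution pair flows according to \(\dot{x}=f(x,u)\) while \((x,u) \in \mathcal {C}\) and jumps via \(x^+=g(x,u)\) when \((x,u) \in \mathcal {D}\). The following Python sketch mirrors this alternation with a forward Euler step and a jump-priority rule whenever \((x,u) \in \mathcal {C}\cap \mathcal {D}\); it is only a heuristic for generating one particular (approximate) solution pair, since solutions of \(\mathcal {H}\) may be nonunique and need not be computable this way in general.

```python
import numpy as np

def simulate(f, g, in_C, in_D, x0, u_of_t, t_end, dt=1e-3, max_jumps=100):
    """Generate one approximate solution pair of the hybrid system (1).

    f, g      : flow and jump maps, f(x, u) -> dx/dt, g(x, u) -> x+
    in_C/in_D : indicator functions of the flow and jump sets
    u_of_t    : ordinary-time input signal s -> u(s) (held constant across jumps)
    Jumps are given priority whenever (x, u) lies in both C and D.
    """
    t, j, x = 0.0, 0, np.asarray(x0, dtype=float)
    traj = [(t, j, x.copy())]
    while t < t_end and j < max_jumps:
        u = u_of_t(t)
        if in_D(x, u):                       # jump
            x, j = np.asarray(g(x, u), dtype=float), j + 1
        elif in_C(x, u):                     # flow (one Euler step)
            x, t = x + dt * np.asarray(f(x, u), dtype=float), t + dt
        else:                                # neither: the solution cannot be extended
            break
        traj.append((t, j, x.copy()))
    return traj
```

For instance, the reset system (11) of Sect. 4.1 fits this template with f and g taken from (11a)–(11b) and in_C, in_D the indicators of \(\mathcal {C}\) and \(\mathcal {D}\). Whether the generated arc is maximal or complete must still be argued analytically; the sketch only mirrors the flow/jump alternation in the definition above.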

3.1 Stability notions

Given the system \(\mathcal {H}\) and a nonempty compact set \({\mathcal {A}}\subset \mathcal {X}\), the set \({\mathcal {A}}\) is called

  • 0-input pre-stable if for any \(\epsilon > 0\) there exists \(\delta > 0\) such that each solution pair \((x,0) \in \varrho ^u (\xi )\) with \(\left| \xi \right| _{\mathcal {A}}\le \delta \) satisfies \(\left| x(t,j,\xi ,0)\right| _{\mathcal {A}}\le \epsilon \) for all \((t,j) \in \mathrm {dom}\,x\).

  • 0-input pre-attractive if there exists \(\delta > 0\) such that each solution pair \((x,0) \in \varrho ^u (\xi )\) with \(\left| \xi \right| _{\mathcal {A}}\le \delta \) is bounded (with respect to \(\mathcal {X}\)) and, if it is complete, then \(\lim _{(t,j) \in \mathrm {dom}\, x , \, t+j \rightarrow +\infty } \left| x(t,j,\xi ,0)\right| _{\mathcal {A}} = 0\).

  • 0-input pre-asymptotically stable (pre-AS) if it is both 0-input pre-stable and 0-input pre-attractive.

  • 0-input asymptotically stable (AS) if it is 0-input pre-AS and there exists \(\delta > 0\) such that each solution pair \((x,0) \in \varrho ^u (\xi )\) with \(\left| \xi \right| _{\mathcal {A}}\le \delta \) is complete.

It should be noted that the prefix “pre-” emphasizes that not every solution is required to be complete. If all solutions are complete, then we drop the prefix.

Definition 1

Let \({\mathcal {A}}\subset \mathcal {X}\) be a compact set. Also, let \(\omega \) be a proper indicator for \({\mathcal {A}}\) on \(\mathcal {X}\). The hybrid system \(\mathcal {H}\) is said to be pre-integral input-to-state stable (pre-iISS) with respect to \({\mathcal {A}}\) if there exist \(\alpha \in \mathcal {K}_\infty \), \(\gamma _1,\gamma _2 \in \mathcal {K}\) and \({\tilde{\beta }} \in \mathcal {KLL}\) such that for all \(u \in \mathcal {L}_{\gamma _1,\gamma _2}^e\), all \(\xi \in \mathcal {X}\), and all \((t,j) \in \mathrm {dom}\, x\), each solution pair (xu) to \(\mathcal {H}\) satisfies

$$\begin{aligned}&\alpha (\omega (x(t,j,\xi ,u))) \le {\tilde{\beta }} (\omega (\xi ),t,j) + \left\| u_{(t,j)}\right\| _{\gamma _1,\gamma _2} .&\end{aligned}$$
(2)

Remark 1

We point out that \(\alpha \) on the left-hand side of (2) is redundant. In particular, \(\mathcal {H}\) is pre-iISS with respect to \({\mathcal {A}}\) if and only if there exist \(\eta ,\gamma _1,\gamma _2 \in \mathcal {K}\) and \(\beta \in \mathcal {KLL}\) satisfying

$$\begin{aligned}&\omega (x(t,j,\xi ,u)) \le \beta (\omega (\xi ),t,j) + \eta \left( \left\| u_{(t,j)}\right\| _{\gamma _1,\gamma _2}\right) .&\end{aligned}$$

We, however, place emphasis on (2) for two reasons: first, (2) is consistent with the continuous-time and discrete-time counterparts in [1, 4]; second, (2) simplifies the exposition of the proofs.

Definition 2

Given a compact set \({\mathcal {A}}\subset \mathcal {X}\), let \(\omega \) be a proper indicator for \({\mathcal {A}}\) on \(\mathcal {X}\). A smooth function \(V :\mathcal {X}\rightarrow \mathbb R_{\ge 0}\) is called an iISS Lyapunov function with respect to \((\omega ,\left| \cdot \right| )\) for (1) if there exist functions \(\alpha _1,\alpha _2 \in \mathcal {K}_\infty \), \(\sigma \in \mathcal {K}\), and \(\alpha _3 \in \mathcal {PD}\) such that

$$\begin{aligned} \alpha _1 (\omega (\xi ))&\le V(\xi ) \le \alpha _2 (\omega (\xi )) \quad \,\qquad \forall \xi \in \mathcal {X}, \end{aligned}$$
(3)
$$\begin{aligned} \langle \nabla V (\xi ),f(\xi ,u) \rangle&\le -\alpha _{3}(\omega (\xi )) + \sigma (\left| u\right| ) \qquad \forall (\xi ,u) \in \mathcal {C}, \end{aligned}$$
(4)
$$\begin{aligned} V (g(\xi ,u)) - V (\xi )&\le - \alpha _{3}(\omega (\xi )) + \sigma (\left| u\right| ) \qquad \forall (\xi ,u) \in \mathcal {D}. \end{aligned}$$
(5)

Definition 3

[4] A positive definite function \(W :\mathcal {X}\rightarrow \mathbb R_{\ge 0}\) is called semi-proper if there exist \(\pi \in \mathcal {K}\) and a proper positive definite function \(W_0\) such that \(W (\cdot ) = \pi ( W_0 (\cdot ) )\).

The following definitions are required to relate pre-iISS to the hybrid invariance principle [10].

Definition 4

([21, Definition 6.2]) Given sets \({\mathcal {A}}, K \subset \mathcal {X}\), the distance to \({\mathcal {A}}\) is 0-input detectable relative to K for \(\mathcal {H}\) if every complete solution pair (x, 0) to \(\mathcal {H}\) with \(x(t,j) \in K\) for all \((t,j) \in \mathrm {dom}\,x\) satisfies \(\lim _{(t,j)\rightarrow +\infty ,(t,j)\in \mathrm {dom}\,x} \omega (x(t,j)) = 0\), where \(\omega \) is a proper indicator for \({\mathcal {A}}\) on \(\mathcal {X}\).

Definition 5

Let \(\omega \) be a proper indicator for \({\mathcal {A}}\) on \(\mathcal {X}\). \(\mathcal {H}\) is said to be smoothly dissipative with respect to \({\mathcal {A}}\) if there exist a smooth function \(V :\mathcal {X}\rightarrow \mathbb R_{\ge 0}\), called a storage function, functions \(\alpha _4,\alpha _5 \in \mathcal {K}_\infty \), \(\sigma \in \mathcal {K}\), and a continuous function \(\rho : \mathcal {X}\rightarrow \mathbb R_{\ge 0}\) with \(\rho (\xi ) = 0\) for all \(\xi \in {\mathcal {A}}\) such that

$$\begin{aligned} \alpha _4 (\omega (\xi ))\le & {} V(\xi ) \le \alpha _5 (\omega (\xi )) \quad \, \forall \xi \in \mathcal {X}, \end{aligned}$$
(6)
$$\begin{aligned} \langle \nabla V (\xi ),f(\xi ,u) \rangle\le & {} - \rho (\xi ) + \sigma (\left| u\right| ) \qquad \forall (\xi ,u) \in \mathcal {C}, \end{aligned}$$
(7)
$$\begin{aligned} V(g(\xi ,u)) - V(\xi )\le & {} - \rho (\xi ) + \sigma (\left| u\right| ) \qquad \forall (\xi ,u) \in \mathcal {D}. \end{aligned}$$
(8)

We note that Definition 5 subsumes Definition 2 as a special case. As we will see later (cf. Theorem 1), the existence of a storage function V plus the 0-input detectability relative to K is equivalent to the existence of an iISS Lyapunov function.

4 Main results

This section addresses equivalent characterizations of pre-iISS. In particular, a Lyapunov characterization of pre-iISS, together with other related notions, is presented.

Given a set \(S \subset \mathcal {X}\times \mathcal {U}\), we denote \({\varPi }_0 (S) := \{ x \in \mathcal {X}: (x,0) \in S \}\). Here is the main result of this paper.

Theorem 1

Let \({\mathcal {A}}\subset \mathcal {X}\) be a compact set. Also, let \(\omega \) be a proper indicator for \({\mathcal {A}}\) on \(\mathcal {X}\). Suppose that the Standing Assumptions hold. Also, assume that \({\varPi }_0 (\mathcal {C}) \cup {\varPi }_0 (\mathcal {D}) = \mathcal {X}\). Then the following are equivalent

  1. (i)

    \(\mathcal {H}\) is pre-iISS with respect to \({\mathcal {A}}\).

  2. (ii)

    \(\mathcal {H}\) admits a smooth iISS Lyapunov function with respect to \((\omega ,\left| \cdot \right| )\).

  3. (iii)

    \(\mathcal {H}\) is smoothly dissipative with respect to \({\mathcal {A}}\) and the distance to \({\mathcal {A}}\) is 0-input detectable relative to \(\{ \xi \in \mathcal {X}:\rho (\xi ) = 0 \}\) with \(\rho \) as in (7) and (8).

  4. (iv)

    \(\mathcal {H}\) is 0-input pre-AS and \(\mathcal {H}\) is smoothly dissipative with respect to \({\mathcal {A}}\) with \(\rho \equiv 0\).

Proof

We show that \((ii) \Rightarrow (i)\) in Sect. 4.2. We also give a proof of the implication \((i) \Rightarrow (ii)\) in Sect. 4.3. The implication \((iv) \Rightarrow (ii)\) immediately follows from the combination of Proposition 2 (see below) and Definition 5. To see the implication \((ii) \Rightarrow (iii)\), let the iISS Lyapunov function V be a storage function with \(\rho (\xi ) := \alpha _3 (\omega (\xi ))\) and \(\alpha _3\) as in (4) and (5). So \(\mathcal {H}\) is smoothly dissipative. Moreover, the distance to \({\mathcal {A}}\) is 0-input detectable relative to \(\{ \xi \in \mathcal {X}:\rho (\xi ) = 0 \}\) because \(\rho (\xi ) = 0\) implies that \(\xi \in {\mathcal {A}}\). Finally, the implication \((iii) \Rightarrow (iv)\) is established as follows: Let V be a storage function. Also, assume that \(u \equiv 0\). According to [9, Theorem 23], \({\mathcal {A}}\) is 0-input pre-stable. To show 0-input pre-attractivity of \({\mathcal {A}}\), consider a complete solution pair (x, 0) to \(\mathcal {H}\), which is bounded due to 0-input pre-stability of \({\mathcal {A}}\). We first note that, since \(\mathcal {H}\) satisfies the Standing Assumptions and \(u \equiv 0\), the invariance principle for hybrid systems (e.g., Corollary 8.4 in [10]) can be applied. According to [10, Corollary 8.4], there exists some \(r \ge 0\) such that every complete solution (x, 0) to \(\mathcal {H}\) converges to the largest weakly invariant set contained in

$$\begin{aligned}&{\big \{ \xi : V(\xi ) = r \big \} \cap \big ( \rho _\mathcal {C}^{-1} (0) \cup \rho _\mathcal {D}^{-1} (0)\big )}&\end{aligned}$$
(9)

where \(\rho _\mathcal {C}^{-1} (0) := \{ \xi \in \mathcal {C}: \rho (\xi ) = 0 \}\) and \(\rho _\mathcal {D}^{-1} (0) := \{ \xi \in \mathcal {D}: \rho (\xi ) = 0 \}\). It follows from the 0-input detectability relative to \(\{ \xi \in \mathcal {X}:\rho (\xi ) = 0 \}\) that every complete solution contained in the set (9) converges to \({\mathcal {A}}\). Moreover, from (6), the only invariant set in (9) is obtained for \(r=0\). Since the set (9) lies in \({\mathcal {A}}\) for \(r=0\), \({\mathcal {A}}\) is 0-input pre-attractive. Finally, we note that smooth dissipativity of \(\mathcal {H}\) with respect to \((\omega ,\left| \cdot \right| )\) with \(\rho \equiv 0\) is obviously satisfied. This completes the proof. \(\square \)

Remark 2

The assumption \({\varPi }_0 (\mathcal {C}) \cup {\varPi }_0 (\mathcal {D}) = \mathcal {X}\) means that the union of the flow set and the jump set of the disturbance-free system covers \(\mathcal {X}\). As shown in [8, Section IV], there are hybrid systems that do not satisfy this assumption, for instance hybrid systems with logic variables. The assumption could be relaxed at the expense of further technicalities, following similar lines as in the proof of [10, Theorem 7.31]. However, we do not pursue this here, as it would make the proofs considerably more complicated without adding much insight.

4.1 Illustrative example

Here we verify iISS of a hybrid system using an iISS Lyapunov function. Consider a first-order integrator

$$\begin{aligned} \dot{x}_p = u , \end{aligned}$$
(10)

where \(u\in \mathbb R\) is the control input to the system. We aim to control the system using a reset controller under input constraints (i.e., \(|u| \le \overline{u}\) for some given \(\overline{u} >0\)). As shown in [18], designing a reset controller subject to disturbances and input constraints leads to a hybrid system of the form (1) given by

$$\begin{aligned}&\left. \begin{array}{rcl} \dot{x}_p &{} = &{} \lambda _p \arctan (x_p) + b \arctan (x_c) + w \\ \dot{x}_c &{} = &{} \lambda _c \arctan (x_c) + k \arctan (x_p) \end{array} \right\} (x,w) \in \mathcal {C}, \end{aligned}$$
(11a)
$$\begin{aligned}&\left. \begin{array}{rcl} x_p^+ &{} = &{} x_p \\ x_c^+ &{} = &{} 0 \end{array} \right\} (x,w) \in \mathcal {D}, \end{aligned}$$
(11b)

where \(x := (x_p,x_c)\) is the state of the closed-loop system, \(w \in \mathbb R\) is the disturbance input, \(\mathcal {C}= \{ (x,w) \in \mathbb R^2 \times \mathbb R: x_p ( x_c - x_p) \le 0 \}\), \(\mathcal {D}= \{ (x,w) \in \mathbb R^2 \times \mathbb R: x_p ( x_c - x_p) \ge 0 \}\), and the constants \(b , k > 0\) and \(\lambda _p , \lambda _c < 0\) are chosen later. By the definition of \(\mathcal {D}\), the output of the controller is reset to zero whenever \(x_p ( x_c - x_p) \ge 0\). Note that for sufficiently large w each solution to the system is unbounded, which shows that the system is not ISS.

Corollary 1

Consider system (11). Given \(b , k > 0\) and \(\lambda _p , \lambda _c < 0\), assume that there exist positive constants \(c_1,c_2>0\) such that

$$\begin{aligned} c_1 \lambda _p + b c_1 + k c_2 \le 0 , \;\; c_2 \lambda _c + k c_2 + b c_1 \le 0 . \end{aligned}$$
(12)

Take the proper indicator \(\omega (\cdot ) = \left| \cdot \right| \). Then system (11) is pre-iISS with respect to the origin.

Proof

Take the following iISS Lyapunov function candidate

$$\begin{aligned} V(x) = c_1 x_p \arctan (x_p) + c_2 x_c \arctan (x_c) . \end{aligned}$$

Obviously, V satisfies (3) for some appropriate \(\alpha _1,\alpha _2 \in \mathcal {K}_\infty \) and \(\omega (\cdot ) = \left| \cdot \right| \). Picking \((x,w) \in \mathcal {C}\), we have

$$\begin{aligned} \langle \nabla V , f(x,w) \rangle = \,&c_1 \Big [ \arctan (x_p) \big ( \lambda _p \arctan (x_p) + b \arctan (x_c) + w \big ) \\&+ \frac{x_p}{1+x_p^2} \big ( \lambda _p \arctan (x_p) + b \arctan (x_c) + w \big ) \Big ] \\&+ c_2 \Big [ \arctan (x_c) \big ( \lambda _c \arctan (x_c) + k \arctan (x_p) \big ) \\&+ \frac{x_c}{1+x_c^2} \big ( \lambda _c \arctan (x_c) + k \arctan (x_p) \big ) \Big ] . \end{aligned}$$

Using Young’s inequality and the facts that \(\left| \arctan (s)\right| \le \pi /2\) and \(\left| s\right| /(1+s^2) \le 1\) for all \(s \in \mathbb R\) gives

$$\begin{aligned} \langle \nabla V , f(x,w) \rangle \le \,&\big ( c_1 \lambda _p + 0.5 b c_1 + k c_2 \big ) [\arctan (x_p)]^2 + c_1 \lambda _p \frac{x_p \arctan (x_p)}{1+x_p^2}\\&+ \frac{c_1 b}{2} \frac{x_p^2}{1+x_p^2} + \big (c_2 \lambda _c + 0.5 k c_2 + b c_1 \big ) [\arctan (x_c)]^2 \\&+ c_2 \lambda _c \frac{x_c \arctan (x_c)}{1+x_c^2} + \frac{c_2 k}{2} \frac{x_c^2}{1+x_c^2} + \frac{c_1 (\pi +1)}{2} \left| w\right| . \end{aligned}$$

From the fact that \(\frac{s^2}{1+s^2} \le [\arctan (s)]^2\) for all \(s \in \mathbb R\), we have

$$\begin{aligned}&\langle \nabla V , f(x,w) \rangle \le \big ( c_1 \lambda _p + b c_1 + k c_2 \big ) [\arctan (x_p)]^2 + c_1 \lambda _p \frac{x_p \arctan (x_p)}{1+x_p^2} \nonumber \\&\qquad + \big (c_2 \lambda _c + k c_2 + b c_1 \big ) [\arctan (x_c)]^2 + c_2 \lambda _c \frac{x_c \arctan (x_c)}{1+x_c^2} + \frac{c_1 (\pi +1)}{2} \left| w\right| \nonumber \\&\quad \le \big ( c_1 \lambda _p + b c_1 + k c_2 \big ) [\arctan (x_p)]^2 + \big (c_2 \lambda _c + k c_2 + b c_1 \big ) [\arctan (x_c)]^2 \nonumber \\&\qquad +\frac{c_1 (\pi +1)}{2} \left| w\right| . \end{aligned}$$
(13)

Now we consider the jump dynamics on the set \(\mathcal {D}\). For any \((x,w) \in \mathcal {D}\) we get

$$\begin{aligned} V(g(x)) - V(x)&= - c_2 x_c \arctan (x_c) \\&= - \rho x_p \arctan (x_p) - c_2 x_c \arctan (x_c) + \rho x_p \arctan (x_p) , \end{aligned}$$

where \(0< \rho < c_2\). Note that \((x,w) \in \mathcal {D}\) implies that \(x_p \arctan (x_p) \le x_c \arctan (x_c)\). So we have

$$\begin{aligned} V(g(x)) - V(x)&\le - \rho x_p \arctan (x_p) - c_2 x_c \arctan (x_c) + \rho x_c \arctan (x_c) \nonumber \\&= - \rho x_p \arctan (x_p) - (c_2-\rho ) x_c \arctan (x_c) . \end{aligned}$$
(14)

It follows from (12), (13) and (14) that V is an iISS Lyapunov function for system (11). \(\square \)
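
As a numerical sanity check, conditions (12)–(14) can be verified for concrete parameter values. The Python sketch below uses the illustrative choices \(\lambda _p = \lambda _c = -3\), \(b = k = 1\), \(c_1 = c_2 = 1\) (assumed values satisfying (12), not values taken from [18]) and verifies, on a sample grid, the disturbance-free dissipation implied by (13) along flows and by (14) across jumps; the term \(\tfrac{c_1(\pi +1)}{2}\left| w\right| \) in (13) provides the input-dependent supply rate needed for the iISS estimate and is not tested here.

```python
import numpy as np

# Illustrative parameters (assumed, not from [18]) satisfying condition (12):
lp, lc, b, k, c1, c2 = -3.0, -3.0, 1.0, 1.0, 1.0, 1.0
assert c1 * lp + b * c1 + k * c2 <= 0 and c2 * lc + k * c2 + b * c1 <= 0   # (12)

V = lambda xp, xc: c1 * xp * np.arctan(xp) + c2 * xc * np.arctan(xc)

def dV_flow(xp, xc, w=0.0):
    """<grad V(x), f(x, w)> for the flow dynamics (11a)."""
    fp = lp * np.arctan(xp) + b * np.arctan(xc) + w
    fc = lc * np.arctan(xc) + k * np.arctan(xp)
    gp = c1 * (np.arctan(xp) + xp / (1 + xp ** 2))     # dV/dx_p
    gc = c2 * (np.arctan(xc) + xc / (1 + xc ** 2))     # dV/dx_c
    return gp * fp + gc * fc

grid = np.linspace(-10.0, 10.0, 201)
XP, XC = np.meshgrid(grid, grid)
C_mask = XP * (XC - XP) <= 0                           # flow set (w suppressed)
D_mask = XP * (XC - XP) >= 0                           # jump set

# 0-input decrease along flows on C and across jumps on D (jump map g = (x_p, 0)):
assert np.all(dV_flow(XP, XC)[C_mask] <= 1e-9)
assert np.all((V(XP, 0 * XC) - V(XP, XC))[D_mask] <= 1e-9)
print("Flow and jump decrease conditions verified on the sample grid.")
```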

Finding an iISS Lyapunov function is not always easy. Alternatively, either item (iii) or (iv) can be used to conclude the iISS property; see Sect. 5.

4.2 Proof of the implication \((ii) \Rightarrow (i)\)

Consider a solution pair \((x,u)\) to \(\mathcal {H}\). Given (4) and (5), we have

$$\begin{aligned} \langle \nabla V (x(t,j)),f(x(t,j),u(t,j)) \rangle \le - \alpha _3 (\omega (x(t,j))) + \sigma (\left| u(t,j)\right| ) \end{aligned}$$

for almost all t such that \((t,j) \in \mathrm {dom}\,x \backslash {\varGamma }(x)\); and

$$\begin{aligned} V\left( g(x(t,j),u(t,j))\right) - V(x(t,j)) \le - \alpha _3 (\omega (x(t,j))) + \sigma (\left| u(t,j)\right| ) \end{aligned}$$

for all \((t,j) \in {\varGamma }(x)\). Applying [4, Lemma IV.1] to \(\alpha _3\), there exist \(\rho _1 \in \mathcal {K}_\infty \) and \(\rho _2 \in \mathcal {L}\) such that

$$\begin{aligned} \langle \nabla V (x(t,j)),f(x(t,j),u(t,j)) \rangle \le - {\rho _1}\left( {\omega \left( x(t,j) \right) } \right) \,{\rho _2}\left( {\omega \left( x(t,j) \right) } \right) + \sigma (\left| u(t,j)\right| ) \end{aligned}$$

for almost all t such that \((t,j) \in \mathrm {dom}\,x \backslash {\varGamma }(x)\); and

$$\begin{aligned} V(g(x(t,j),u(t,j))) - V(x(t,j)) \le - {\rho _1}\left( {\omega \left( x(t,j) \right) } \right) \,{\rho _2}\left( {\omega \left( x(t,j) \right) } \right) + \sigma (\left| u(t,j)\right| ) \end{aligned}$$

for all \((t,j) \in {\varGamma }(x)\). Exploiting (3) and letting \({\tilde{\rho }} ( s ) := \rho _1 ( \alpha _2^{- 1} ( s ) ) \, \rho _2 ( \alpha _1^{- 1} ( s ) )\) yields

$$\begin{aligned} \langle \nabla V (x(t,j)),f(x(t,j),u(t,j)) \rangle \le - {\tilde{\rho }} ( V ( x(t,j) ) ) + \sigma (\left| u(t,j)\right| ) \end{aligned}$$
(15)

for almost all t such that \((t,j) \in \mathrm {dom}\,x \backslash {\varGamma }(x)\); and

$$\begin{aligned} V(g(x(t,j),u(t,j))) - V(x(t,j)) \le - {\tilde{\rho }} ( V ( x(t,j) ) ) + \sigma (\left| u(t,j)\right| ) \end{aligned}$$
(16)

for all \((t,j) \in {\varGamma }(x)\). Define the hybrid arcs z and v by

$$\begin{aligned}&z(t,j) := V(x(t,j)) - v(t,j) , \end{aligned}$$
(17)
$$\begin{aligned}&v(t,j) := \int _{0}^{t} \sigma ( \left| u(s,i(s))\right| ) \, \mathrm {d} s + \sum _{{\begin{array}{c}(t^\prime ,j^\prime ) \in {\varGamma }(u),\\ (0,0) \preceq (t^{\prime },j^{\prime }) \prec (t,j)\end{array}}} \sigma (\left| u(t^\prime ,j^\prime )\right| ) . \end{aligned}$$
(18)

It should be pointed out that the hybrid arcs z and v are defined on the same hybrid time domain \(\mathrm {dom}\,x\) because, by assumption, \(\mathrm {dom}\,x = \mathrm {dom}\,u\). It follows from (15), (17) and (18) that the following holds for almost all t such that \((t,j) \in \mathrm {dom} \, z \backslash {\varGamma }(z)\)

$$\begin{aligned} \dot{z} (t,j)&\le - {\tilde{\rho }} (V(x(t,j))) = - {\tilde{\rho }} ( \max \{ z(t,j) + v(t,j) , 0 \} ) . \end{aligned}$$
(19)

From (16), (17) and (18), we have for all \((t,j) \in {\varGamma }(z)\)

$$\begin{aligned}&z (t,j + 1) - z (t,j) \le - {\tilde{\rho }} ( \max \{ z(t,j) + v(t,j) , 0 \} ) .&\end{aligned}$$
(20)

It follows from (19), (20) and Lemma 9 (see Appendix 1) that there exists \(\beta \in \mathcal {KLL}\) such that

$$\begin{aligned} z(t,j) \le&\max \{ \beta (z(0,0),t,j), \left\| v_{(t,j)}\right\| _\infty \} \le \beta (z(0,0),t,j) + \left\| v_{(t,j)}\right\| _\infty&\end{aligned}$$
(21)

for all \((t,j) \in \mathrm {dom}\, z\). An immediate consequence of (17), (18), and the facts that \(z(0,0) = V(x(0,0))\) and \(\left\| v_{(t,j)}\right\| _\infty = v(t,j)\) is

$$\begin{aligned} V ( x (t,j) ) \le \,&\beta \left( {V\left( {{x(0,0)}} \right) ,t,j} \right) + 2 \int _{0}^{t} {\sigma \left( {\left| u(s,i(s))\right| } \right) ds} \\&\quad + 2 { \sum _{{\begin{array}{c}(t^\prime ,j^\prime ) \in {\varGamma }(u),\\ (0,0) \preceq (t^{\prime },j^{\prime }) \prec (t,j)\end{array}}} \sigma (\left| u(t^\prime ,j^\prime )\right| ) }&\end{aligned}$$

for all \((t,j) \in \mathrm {dom}\,x\). Exploiting (3) and setting \({{\tilde{\beta }}} (\cdot ,\cdot ,\cdot ) :=\beta (\alpha _2 (\cdot ),\cdot ,\cdot )\), \(\gamma _1 (\cdot ) := 2\sigma (\cdot )\), \(\gamma _2 (\cdot ) := 2\sigma (\cdot )\) and \(\alpha (\cdot ) := \alpha _1(\cdot )\) gives the conclusion

$$\begin{aligned} \alpha ( \omega (x(t,j)) ) \le \,&\, {\tilde{\beta }} \left( \omega (x(0,0)) ,t,j \right) + \int _0^t {\gamma _1 \left( {\left| u(s,i(s))\right| } \right) \mathrm {d}\,s } \\&\quad + { \sum _{{\begin{array}{c}(t^\prime ,j^\prime ) \in {\varGamma }(u),\\ (0,0) \preceq (t^{\prime },j^{\prime }) \prec (t,j)\end{array}}} \gamma _2 (\left| u(t^\prime ,j^\prime )\right| ) } .&\end{aligned}$$

4.3 Proof of the implication \((i) \Rightarrow (ii)\)

The proof is split into the following steps: (1) we show (Theorem 2 below) that an inflated system, say \(\mathcal {H}_\sigma \), remains pre-iISS under sufficiently small perturbations when \(\mathcal {H}\) is pre-iISS; (2) we define an auxiliary system, say \({\hat{\mathcal {H}}}\), and show that a suitable selection result holds for \({\hat{\mathcal {H}}}\) and \(\mathcal {H}\); (3) we start constructing a smooth converse iISS Lyapunov function for \(\mathcal {H}\) by providing a preliminary, possibly non-smooth, function, denoted by \(V_0\), and we show that \(V_0\) cannot increase too fast along solutions of \({\hat{\mathcal {H}}}\) (cf. Lemma 2 below); (4) we initially smooth \(V_0\) and obtain the partially smooth function \(V_s\) (cf. Lemma 3 below); (5) we smooth \(V_s\) on the whole state space and get the smooth function \(V_1\) (cf. Lemma 4 below); (6) we pass from the results for \({\hat{\mathcal {H}}}\) to similar ones for \(\mathcal {H}\) (cf. Lemma 5 below); (7) we give a characterization of 0-input pre-AS (cf. Proposition 2 below); (8) finally, we combine the results of Lemma 5 with those of Proposition 2 to obtain the smooth converse iISS Lyapunov function V.

Remark 3

It should be noted that the construction of a smooth converse iISS Lyapunov function follows the same steps as those in [4], but with different tools and technicalities. In particular, the authors in [4] provided a preliminary, possibly non-smooth, iISS Lyapunov function and then appealed to [14, Theorem B.1] and [14, Proposition 4.2] to smooth it, without invoking robustness of iISS to sufficiently small perturbations. However, such a procedure does not necessarily carry over to hybrid systems, as it relies on uniform convergence of solutions. This is why we appeal to the results in [7, Sections VI.B-C], originally developed in [27], to smooth our preliminary iISS Lyapunov function. Toward this end, we need to establish robustness of the pre-iISS property of hybrid systems to vanishing perturbations, which is challenging and has not been previously studied in the literature.

4.3.1 Robustness of pre-iISS

Here we show robustness of pre-iISS to small enough perturbations (cf. Theorem 2 below). More precisely, if the original system \(\mathcal {H}\) is pre-iISS, then there exists an inflated hybrid system, denoted by \(\mathcal {H}_\sigma \), which remains pre-iISS under sufficiently small perturbations.

Given the hybrid system \(\mathcal {H}\), a compact set \({\mathcal {A}}\subset \mathcal {X}\), and a continuous function \(\sigma :\mathcal {X}\rightarrow \mathbb R_{\ge 0}\) that is positive on \(\mathcal {X}\backslash {\mathcal {A}}\), the \(\sigma \)-perturbation of \(\mathcal {H}\), denoted by \(\mathcal {H}_\sigma \), is defined by

$$\begin{aligned}&\mathcal {H}_\sigma := \left\{ \begin{array}{lccl} \dot{\overline{x}} &{} \in &{} f_\sigma (\overline{x},u) &{} \quad (\overline{x},u) \in \mathcal {C}_\sigma \\ \overline{x}^+ &{} \in &{} g_\sigma (\overline{x},u) &{} \quad (\overline{x},u) \in \mathcal {D}_\sigma \end{array} \right.&\end{aligned}$$
(22)

where

$$\begin{aligned} f_\sigma (\overline{x},u):= & {} \overline{\mathrm {co}} f \left( (\overline{x} + \sigma (\overline{x}) \overline{\mathbb {B}},u) \cap \mathcal {C}\right) + \sigma (\overline{x}) \overline{\mathbb {B}} , \end{aligned}$$
(23)
$$\begin{aligned} g_\sigma (\overline{x},u):= & {} \big \{ z \in \mathcal {X}: z \in v + \sigma (v) \overline{\mathbb {B}} , v \in g \left( (\overline{x} + \sigma (\overline{x}) \overline{\mathbb {B}},u) \cap \mathcal {D}\right) \big \} , \end{aligned}$$
(24)
$$\begin{aligned} \mathcal {C}_\sigma:= & {} \left\{ (\overline{x},u) :(\overline{x}+\sigma (\overline{x}) \overline{\mathbb {B}},u) \cap \mathcal {C} \ne \emptyset \right\} , \end{aligned}$$
(25)
$$\begin{aligned} \mathcal {D}_\sigma:= & {} \left\{ (\overline{x},u) :(\overline{x}+\sigma (\overline{x}) \overline{\mathbb {B}},u) \cap \mathcal {D} \ne \emptyset \right\} . \end{aligned}$$
(26)

In what follows, by an admissible perturbation radius, we mean any continuous function \(\sigma :\mathcal {X}\rightarrow \mathbb R_{\ge 0}\) such that \(x+\sigma (x) \overline{\mathbb {B}} \subset \mathcal {X}\) for all \(x \in \mathcal {X}\).
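
To make the construction (23)–(26) concrete, membership in the inflated sets can be tested approximately by sampling the ball \(\overline{x} + \sigma (\overline{x}) \overline{\mathbb {B}}\). The Python sketch below is a naive sampling-based test of \((\overline{x},u) \in \mathcal {C}_\sigma \), assuming a generic membership predicate for \(\mathcal {C}\) and a user-supplied admissible radius \(\sigma \); it only illustrates the definition and is not an exact set computation.

```python
import numpy as np

def in_C_sigma(xbar, u, in_C, sigma, n_samples=2000, rng=None):
    """Approximate membership test for (xbar, u) in C_sigma, cf. (25):
    C_sigma = { (xbar, u) : (xbar + sigma(xbar) * closed unit ball, u) meets C }."""
    rng = np.random.default_rng(0) if rng is None else rng
    r = sigma(xbar)
    if in_C(xbar, u):                  # the center itself already lies in C
        return True
    n = xbar.size
    for _ in range(n_samples):         # sample the inflated ball roughly uniformly
        d = rng.standard_normal(n)
        d *= rng.uniform() ** (1.0 / n) / np.linalg.norm(d)
        if in_C(xbar + r * d, u):
            return True
    return False

# Illustrative use with the flow set of example (11): C = { x_p (x_c - x_p) <= 0 }.
in_C = lambda x, u: x[0] * (x[1] - x[0]) <= 0.0
sigma = lambda x: 0.1                  # an assumed constant admissible radius
print(in_C_sigma(np.array([0.05, 0.10]), 0.0, in_C, sigma))   # True: the ball meets C
```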

Theorem 2

Let \(\mathcal {H}\) satisfy the Standing Assumptions. Let \({\mathcal {A}}\subset \mathcal {X}\) be a compact set. Assume that the hybrid system \(\mathcal {H}\) is pre-iISS with respect to \({\mathcal {A}}\). Then there exists an admissible perturbation radius \(\sigma :\mathcal {X}\rightarrow \mathbb R_{\ge 0}\) that is positive on \(\mathcal {X}\backslash {\mathcal {A}}\) such that the hybrid system \(\mathcal {H}_\sigma \), the \(\sigma \)-perturbation of \(\mathcal {H}\), is pre-iISS with respect to \({\mathcal {A}}\) as well.

Proof

See Appendix 1. \(\square \)

Remark 4

Besides its contribution to the proof of our main result, Theorem 2 is of independent interest. We note that the model (22) arises in many practical situations. For instance, assume that \(\mathcal {H}\) is pre-iISS. Different types of perturbations of \(\mathcal {H}\), such as slowly varying parameters, singular perturbations, and highly oscillatory signals, yield a perturbed system which may be modeled by (22) (cf. [3, 16, 28] for more details). Theorem 2 guarantees pre-iISS of the perturbed system under the stated conditions.

4.3.2 The auxiliary system \(\hat{\mathcal {H}}\) and the associated properties

We need to define the following auxiliary system \(\hat{\mathcal {H}}\). Assume that \(\mathcal {H}\) is pre-iISS with respect to \({\mathcal {A}}\) satisfying (2) with suitable functions \(\alpha \), \({{\tilde{\beta }}}\), \(\gamma _1\), \(\gamma _2\). Pick any \(\varphi \in \mathcal {K}_\infty \) with \(\max \{ \gamma _1 \circ \varphi (s), \gamma _2 \circ \varphi (s) \} \le \alpha (s)\) for all \(s \in \mathbb R_{\ge 0}\). Define the following hybrid inclusion

$$\begin{aligned}&\hat{\mathcal {H}} := \left\{ \begin{array}{lccl} \dot{x} &{} \in &{} \hat{F} (x) &{} \qquad x \in \hat{\mathcal {C}} \\ x^+ &{} \in &{} \hat{G} (x) &{} \qquad x \in \hat{\mathcal {D}} \end{array} \right.&\end{aligned}$$
(27)

where

$$\begin{aligned} \begin{array}{rcl} \hat{F} (x) &{} := &{} \left\{ \nu \in \mathbb R^{n}:\nu \in f \left( x , u \right) , u \in \mathcal {U} \cap \varphi (\omega (x)) \overline{\mathbb {B}} \text { and } (x,u) \in \mathcal {C} \right\} , \\ \hat{G} (x) &{} := &{} \left\{ \nu \in \mathcal {X}:\nu \in g \left( x , u \right) , u \in \mathcal {U} \cap \varphi (\omega (x)) \overline{\mathbb {B}} \text { and } (x,u) \in \mathcal {D} \right\} , \\ {\hat{\mathcal {C}}} &{} := &{} \left\{ x \in {\mathcal {X}} :\exists \, u \in \mathcal {U} \cap \varphi (\omega (x)) \, \overline{\mathbb {B}} \text { such that } (x,u) \in \mathcal {C} \right\} , \\ {\hat{\mathcal {D}}} &{} := &{} \left\{ x \in {\mathcal {X}} :\exists \, u \in \mathcal {U} \cap \varphi (\omega (x)) \, \overline{\mathbb {B}} \text { such that } (x,u) \in \mathcal {D} \right\} . \end{array} \end{aligned}$$
(28)

The hybrid inclusion (27) is denoted by \(\hat{\mathcal {H}} := (\hat{F}, \hat{G}, {\hat{\mathcal {C}}} , {\hat{\mathcal {D}}},\mathcal {O})\) where \(\mathcal {O} = {\hat{\mathcal {C}}} \cup {\hat{\mathcal {D}}}\). We note that \(\mathcal {O} = \mathcal {X}\) because \(\mathcal {X}\supset \mathcal {O} = {\hat{\mathcal {C}}} \cup {\hat{\mathcal {D}}} \supset {\varPi }_0 (\mathcal {C}) \cup {\varPi }_0 (\mathcal {D}) = \mathcal {X}\). We also note that \(\hat{F}(x) = \overline{\mathrm {co}} \, \hat{F}(x)\) for each \(x \in {\hat{\mathcal {C}}}\) and the data of \({\hat{\mathcal {H}}}\) satisfy the Hybrid Basic Conditions (cf. Assumption 6.5 in [10]). To distinguish maximal solutions to \({\hat{\mathcal {H}}}\) from those to \(\mathcal {H}\), we denote a maximal solution to \({\hat{\mathcal {H}}}\) starting from \(\xi \) by \(x_\varphi (\cdot ,\cdot ,\xi )\). Let \({\hat{\varrho }} (\xi )\) denote the set of all maximal solutions of \({\hat{\mathcal {H}}}\) starting from \(\xi \in \mathcal {X}\).

We first relate solutions to \(\mathcal {H}\) to those to \({\hat{\mathcal {H}}}\) using the following claim whose proof follows from similar lines as in the proof of [6, Claim 3.7] with minor modifications.

Claim 1

Assume that \(\mathcal {H}\) is pre-forward complete. For each solution x to \({\hat{\mathcal {H}}}\), there exists a hybrid input u such that (xu) is a solution pair to \(\mathcal {H}\) with \(\left| u(t,j)\right| \le \varphi (\omega (x(t,j)))\) for all \((t,j) \in \mathrm {dom}x\).

The following lemma assures that \({\hat{\mathcal {H}}}\) is pre-forward complete.

Lemma 1

Pre-iISS of \(\mathcal {H}\) implies that there exists \(\varphi \in \mathcal {K}_\infty \) such that \(\hat{\mathcal {H}}\) is pre-forward complete.

Proof

Let \(d :\mathrm {dom} \, d \rightarrow \overline{\mathbb {B}}\) be a hybrid input with \(\mathrm {dom} \, d = \mathrm {dom} \, x\) such that \(d \in \mathcal {M}\), where

$$\begin{aligned} \mathcal {M} := \Big \{&d \in \overline{\mathbb {B}} :\Big (x(t,j),\varphi \big (\omega (x(t,j))\big ) d(t,j)\Big ) \in \mathcal {C}\cup \mathcal {D}\quad \forall (t,j) \in \mathrm {dom}\, x \Big \} .&\end{aligned}$$

By the definition of \(\hat{\mathcal {H}}\), Claim 1, the pre-iISS assumption of \(\mathcal {H}\) and the fact that \(\max \{ \gamma _1 \circ \varphi (s), \gamma _2\circ \varphi (s) \} \le \alpha (s)\) for all \(s \in \mathbb {R}_{\ge 0}\), for each solution \(x_\varphi \) to \(\hat{\mathcal {H}}\), there exists a solution pair \((x_\varphi ,\varphi (\omega (x_{\varphi }))d)\) to \(\mathcal {H}\) with \(d \in \mathcal {M}\) such that the following hold

$$\begin{aligned} \alpha (\omega (x_\varphi (t,j,\xi ))) \le&\, {\tilde{\beta }} (\omega (\xi ),t,j) + \int _0^t \gamma _{1}(\left| d(s,i(s))\right| \varphi (\omega (x_{\varphi }(s,i(s),\xi )) )) d s&\\&+ {\sum _{{\begin{array}{c}(t^\prime ,j^\prime ) \in {\varGamma }(x_{\varphi }),\\ (0,0) \preceq (t^{\prime },j^{\prime }) \prec (t,j)\end{array}}} \gamma _2 (\left| d(t^\prime ,j^\prime )\right| \varphi (\omega (x_{\varphi }(t^\prime ,j^\prime ,\xi )) ))}&\\ \le&\, {\tilde{\beta }}_0 (\omega (\xi )) + \int _0^t \alpha (\omega (x_\varphi (s,i(s),\xi )) ) \mathrm {d}s \\&+\, {\sum _{{\begin{array}{c}(t^\prime ,j^\prime ) \in {\varGamma }(x_{\varphi }),\\ (0,0) \preceq (t^{\prime },j^{\prime }) \prec (t,j)\end{array}}} \alpha (\omega (x_{\varphi }(t^\prime ,j^\prime ,\xi )))} \end{aligned}$$

where \({\tilde{\beta }}_0 (\cdot ) := {\tilde{\beta }}(\cdot ,0,0)\). It follows with [20, Proposition 1] that

$$\begin{aligned} \alpha (\omega (x_{\varphi }(t,j,\xi ))) \le {\tilde{\beta }}_0 (\omega (\xi )) e^{t+j} \qquad \forall (t,j) \in \mathrm {dom} \, x . \end{aligned}$$

Therefore, the maximal solution \(x_\varphi \) is bounded whenever the corresponding hybrid time domain is compact. This shows that every maximal solution is either bounded or complete. \(\square \)

We also define the following hybrid inclusion, extended from \(\hat{\mathcal {H}}\):

$$\begin{aligned}&{\hat{\mathcal {H}}}_\sigma := \left\{ \begin{array}{lccl} \dot{\overline{x}} &{} \in &{} \hat{F}_\sigma (\overline{x}) &{} \quad \overline{x} \in {\hat{\mathcal {C}}}_\sigma \\ \overline{x}^+ &{} \in &{} \hat{G}_\sigma (\overline{x}) &{} \quad \overline{x} \in {\hat{\mathcal {D}}}_\sigma \end{array} \right.&\end{aligned}$$

where

$$\begin{aligned} \hat{F}_\sigma (\overline{x}):= & {} \big \{ \nu \in \mathbb R^{n}:\nu \in f_\sigma \left( \overline{x} , u \right) , u \in \mathcal {U} \cap \varphi (\omega (\overline{x})) \overline{\mathbb {B}} \;\text {and}\; (\overline{x},u) \in \mathcal {C}_\sigma \big \} , \\ \hat{G}_\sigma (\overline{x}):= & {} \big \{ \nu \in {\mathcal {X}} :\nu \in g_\sigma \left( \overline{x} , u \right) , u \in \mathcal {U} \cap \varphi (\omega (\overline{x})) \overline{\mathbb {B}} \;\text {and}\; (\overline{x},u) \in \mathcal {D}_{\sigma } \big \} ,\\ {\hat{\mathcal {C}}}_\sigma:= & {} \big \{ \overline{x} \in {\mathcal {X}} :\exists \, u \in \mathcal {U} \cap \varphi (\omega (\overline{x})) \overline{\mathbb {B}} \text { such that } (\overline{x},u) \in \mathcal {C}_\sigma \big \} , \\ {\hat{\mathcal {D}}}_\sigma:= & {} \big \{ \overline{x} \in {\mathcal {X}} :\exists \, u \in \mathcal {U} \cap \varphi (\omega (\overline{x})) \overline{\mathbb {B}} \text { such that } (\overline{x},u) \in \mathcal {D}_\sigma \big \} . \end{aligned}$$

We denote \(\hat{\mathcal {H}}_\sigma \) by \((\hat{F}_\sigma ,\hat{G}_\sigma ,{\hat{\mathcal {C}}}_\sigma ,{\hat{\mathcal {D}}}_\sigma ,\mathcal {X})\). Since \(\sigma \) is an admissible perturbation radius, \({\hat{\mathcal {C}}}_\sigma \cup {\hat{\mathcal {D}}}_\sigma = {\hat{\mathcal {C}}} \cup {\hat{\mathcal {D}}}\). A maximal solution to \(\hat{\mathcal {H}}_\sigma \) starting from \(\overline{\xi }\) is denoted by \(\overline{x}_\varphi (\cdot ,\cdot ,\overline{\xi })\). Let \(\hat{\varrho }_\sigma (\xi )\) denote the set of all maximal solutions to \(\hat{\mathcal {H}}_\sigma \) starting from \(\xi \in \mathcal {X}\). It is straightforward to see that the combination of Lemma 1 and Theorem 2 ensures that \(\hat{\mathcal {H}}_{\sigma }\) is pre-forward complete.

Corollary 2

Pre-iISS of \(\mathcal {H}_\sigma \) implies that there exists \(\varphi \in \mathcal {K}_\infty \) such that \({\hat{\mathcal {H}}}_\sigma \) is pre-forward complete.

It should be pointed out that, by [7, Proposition 3.1], \({\hat{\mathcal {H}}}_\sigma \) satisfies the Standing Assumptions as long as \(\hat{\mathcal {H}}\) satisfies the same conditions and \(\sigma \) is an admissible perturbation radius.

4.3.3 The preliminary function \(V_0\)

We start constructing the smooth converse iISS Lyapunov function by giving a possibly non-smooth function \(V_0\). Before proceeding to the main result of this subsection, we define the following set. Consider a hybrid signal \(d :\mathrm {dom} \, d \rightarrow \overline{\mathbb {B}}\) with \(\mathrm {dom} \, d = \mathrm {dom} \, \overline{x}\) such that \(d \in \overline{\mathcal {M}}\), where

$$\begin{aligned} \overline{\mathcal {M}} := \big \{ d \in \overline{\mathbb {B}} :(\overline{x}(t,j),\varphi (\omega (\overline{x}(t,j))) d(t,j)) \in \mathcal {C}_\sigma \cup \mathcal {D}_{\sigma } \quad \forall (t,j) \in \mathrm {dom}\, \overline{x} \big \} . \end{aligned}$$

Lemma 2

Let \({\mathcal {A}} \subset {\mathcal {X}}\) be a compact set. Also, let \(\sigma :{\mathcal {X}} \rightarrow \mathbb R_{\ge 0}\) be an admissible perturbation radius that is positive on \({\mathcal {X}} \backslash {\mathcal {A}}\). Let \(\omega \) be a proper indicator for \({\mathcal {A}}\) on \(\mathcal {X}\). Assume that \(\mathcal {H}_\sigma \) is pre-iISS with respect to \({\mathcal {A}}\), satisfying (2) with suitable functions \(\alpha \in \mathcal {K}_\infty \), \(\overline{\beta } \in \mathcal {KLL}\), \(\overline{\gamma }_1,\overline{\gamma }_2 \in \mathcal {K}\). Let \(\varphi \in \mathcal {K}_\infty \) be such that \(\max \{ \overline{\gamma }_1 \circ \varphi (s), \overline{\gamma }_2 \circ \varphi (s) \} \le \alpha (s)\) for all \(s \in \mathbb R_{\ge 0}\). Then there exists a function \(V_0 :{\mathcal {X}} \rightarrow \mathbb R_{\ge 0}\), defined by

$$\begin{aligned} V_0 (\xi ) = \sup \big \{ z(t,j,\xi ,d) :(t,j) \in \mathrm {dom} \, \overline{x}_{\varphi } \, , \, d \in \overline{\mathcal {M}} \big \} \end{aligned}$$
(29)

where for each \(\xi \in {\mathcal {X}}\) and \(d \in \overline{\mathcal {M}}\), \(z(\cdot ,\cdot ,\xi ,d)\) is defined by

$$\begin{aligned} z(t,j,\xi ,d) :=&\alpha ( \omega ( \overline{x}_{\varphi } (t,j,\xi ) ) ) - \int _0^t \overline{\gamma }_1 (\left| d(s,i(s))\right| \, \varphi ( \omega ( \overline{x}_\varphi (s,i(s),\xi ) ) ) ) \mathrm {d} s&\nonumber \\&- {\sum _{{\begin{array}{c}(t^\prime ,j^\prime ) \in {\varGamma }(\overline{x}_{\varphi }),\\ (0,0) \preceq (t^{\prime },j^{\prime }) \prec (t,j)\end{array}}} \overline{\gamma }_2 (\left| d(t^\prime ,j^\prime )\right| \varphi (\omega (\overline{x}_{\varphi }(t^\prime ,j^\prime ,\xi )) ))} \end{aligned}$$
(30)

such that

$$\begin{aligned}&\alpha ( \omega (\xi ) ) \le V_0 (\xi ) \le \overline{\beta }_0 (\omega (\xi )) \quad \quad \forall \xi \in {\mathcal {X}} , \,\mathrm { and } \;\,\overline{\beta }_0 (\cdot ) := \overline{\beta }(\cdot ,0,0) ,&\end{aligned}$$
(31)
$$\begin{aligned}&V_0 (\overline{x}_\varphi (h,0,\xi )) - V_0 (\xi ) \le \int _0^h \overline{\gamma }_1 (\left| \mu \right| \, \varphi ( \omega ( \overline{x}_\varphi (s,0,\xi ) ) ) ) \mathrm {d} s \nonumber \\&\qquad \qquad \qquad \qquad \forall \xi \in \hat{\mathcal {C}} \backslash {\mathcal {A}}, \left| \mu \right| \le 1, \overline{x}_\varphi \in \hat{\varrho }_\sigma (\xi ) \,\mathrm { with }\, (h,0) \in \mathrm {dom}\, \overline{x}_\varphi ,&\end{aligned}$$
(32)
$$\begin{aligned}&V_0 (g) - V_0 (\xi ) \le \overline{\gamma }_2 (\left| \mu \right| \varphi (\omega (\xi ))) \;\; \forall \xi \in \hat{\mathcal {D}} , g \in \hat{G}(\xi ) , \left| \mu \right| \le 1 .&\end{aligned}$$
(33)

The proof of the lemma is not presented due to space constraints. However, it follows the same arguments given in the proof of \(2 \Rightarrow 1\) in  [4, Theorem 1] and the proof of [1, Theorem 1]. We refer the reader to [19] for more details.

4.3.4 Initial smoothing

Here we construct a partially smooth function on \({\mathcal {X}}\) from \(V_{0}\).

Lemma 3

Let \({\mathcal {A}}\subset {\mathcal {X}}\) be a compact set. Also, let \(\sigma :{\mathcal {X}} \rightarrow \mathbb R_{\ge 0}\) be an admissible perturbation radius that is positive on \({\mathcal {X}} \backslash {\mathcal {A}}\). Let \(\omega \) be a proper indicator on \(\mathcal {X}\) for \({\mathcal {A}}\). Assume that \(\mathcal {H}_\sigma \) is pre-iISS with respect to \({\mathcal {A}}\). Then for any \(\xi \in \mathcal {X}\) and \(\left| \mu \right| \le 1\), there exist \(\underline{\alpha }_s, \overline{\alpha }_s, {{\tilde{\gamma }}}_1 , {{\tilde{\gamma }}}_2 \in \mathcal {K}_\infty \), and a continuous function \(V_s :\mathcal {X}\rightarrow \mathbb R_{\ge 0}\), smooth on \(\mathcal {X}\backslash {\mathcal {A}}\), such that

$$\begin{aligned} \underline{\alpha }_s ( \omega (\xi ) ) \le V_s (\xi )\le & {} \overline{\alpha }_{s} (\omega (\xi )) \qquad \;\;\,\quad \forall \xi \in \mathcal {X}, \\ \max _{f \in \hat{F}(\xi )} \langle \nabla V_{s} (\xi ),f \rangle\le & {} {\tilde{\gamma }}_1 (\left| \mu \right| \varphi (\omega (\xi ))) \quad \forall \xi \in \hat{\mathcal {C}} \backslash {\mathcal {A}} , \\ \max _{g \in \hat{G}(\xi )} V_s (g) - V_s (\xi )\le & {} {\tilde{\gamma }}_{2} (\left| \mu \right| \varphi (\omega (\xi ))) \quad \forall \xi \in \hat{\mathcal {D}} . \end{aligned}$$

Proof

Let the functions \(V_0\), \(\alpha \), \(\overline{\beta }\), \(\overline{\gamma }_1\), \(\overline{\gamma }_2\) and \(\varphi \) come from Lemma 2. We begin by giving the following property of \(V_0\), whose proof follows from similar arguments as those in [7, Proposition 7.1] with essential modifications.

Proposition 1

The function \(V_0\) is upper semi-continuous on \(\mathcal {X}\).

To prove the lemma, we follow the same approach as the one in [7, Section VI.B] to construct a partially smooth function \(V_s\) from \(V_0\). Let \(\psi :\mathbb R^{n}\rightarrow [0,1]\) be a smooth function which vanishes outside of \(\overline{\mathbb {B}}\) satisfying \(\int \psi (\xi ) \mathrm {d} \xi = 1\) where the integration (throughout this subsection) is over \(\mathbb R^{n}\). We find a partially smooth and sufficiently small function \({\tilde{\sigma }} :\mathcal {X}\backslash {\mathcal {A}}\rightarrow \mathbb R_{> 0}\) and define the function \(V_s :\mathcal {X}\rightarrow \mathbb R_{\ge 0}\) by

$$\begin{aligned} V_s (\xi ) := \left\{ \begin{array}{l} 0 \quad \mathrm {for} \; \xi \in {\mathcal {A}}, \\ \int V_{0} (\xi + {\tilde{\sigma }} (\xi ) \eta ) \psi (\eta ) \mathrm {d} \eta \quad \; \mathrm {for} \; \xi \in \mathcal {X}\backslash {\mathcal {A}}. \\ \end{array} \right. \end{aligned}$$
(34)

so that some desired properties [cf. items (a), (b) and (c) below] are met. In other words, we find an appropriate \({{\tilde{\sigma }}}\) such that the following are obtained

  1. (a)

    The function \(V_s\) is well-defined, continuous on \(\mathcal {X}\), smooth and positive on \(\mathcal {X}\backslash {\mathcal {A}}\);

  2. (b)

    for some \(\underline{\alpha }_s,\overline{\alpha }_s \in \mathcal {K}_\infty \) the following conditions hold

    $$\begin{aligned}&V_s (\xi ) |_{\xi \in {\mathcal {A}}} = 0,&\end{aligned}$$
    (35)
    $$\begin{aligned}&\underline{\alpha }_s (\omega (\xi )) \le V_s (\xi ) \le \overline{\alpha }_s (\omega (\xi )) \qquad \forall \xi \in \mathcal {X};&\end{aligned}$$
    (36)
  3. (c)

    for some \({\tilde{\gamma }}_1,{\tilde{\gamma }}_2 \in \mathcal {K}_\infty \), it holds that

    $$\begin{aligned} \max _{f \in \hat{F}(\xi )} \langle \nabla V_s (\xi ),f \rangle&\le {\tilde{\gamma }}_1 (\left| \mu \right| \varphi (\omega (\xi )) ) \qquad \forall \xi \in \hat{\mathcal {C}} \backslash {\mathcal {A}},&\end{aligned}$$
    (37)
    $$\begin{aligned} \max _{g \in \hat{G}(\xi )} V_s (g) - V_s (\xi )&\le {\tilde{\gamma }}_{2} (\left| \mu \right| \varphi (\omega (\xi ))) \qquad \forall \xi \in \hat{\mathcal {D}} .&\end{aligned}$$
    (38)

Regarding (a), we appeal to [13, Theorem 3.1] to achieve the desired properties. This theorem requires that \(V_0 (\xi ) |_{\xi \in {\mathcal {A}}} = 0\), which is shown in the previous subsection; that \(V_0\) is upper semi-continuous on \(\mathcal {X}\), which is established by Proposition 1; and that \(\mathcal {X}\backslash {\mathcal {A}}\) is open, which is guaranteed by [7, Lemma 7.5].

Regarding (b), the property (35) follows from the definition of \(V_s\), the upper semi-continuity of \(V_0\), and the openness of \(\mathcal {X}\backslash {\mathcal {A}}\). Also, it follows from [7, Lemma 7.7] that we can pick the function \({{\tilde{\sigma }}}\) sufficiently small such that for any \(\mu _1,\mu _2 \in \mathcal {K}_\infty \) satisfying

$$\begin{aligned}&\mu _1 (s)< s < \mu _2 (s) \qquad \forall s \in \mathbb R_{> 0},&\end{aligned}$$
(39)

the following hold

$$\begin{aligned}&\alpha (\mu _1 (\omega (\xi )))< V_s (\xi ) < \overline{\beta }_0 (\mu _2 (\omega (\xi ))) \qquad \forall \xi \in \mathcal {X}.&\end{aligned}$$
(40)

So the inequalities (36) are obtained, as well.

Regarding (c), let \(\sigma _2\) be a continuous function that is positive on \(\mathcal {X}\backslash {\mathcal {A}}\) and that satisfies \(\sigma _2 (\xi ) \le \sigma (\xi ) \) for all \(\xi \in {\mathcal {X}}\). We first construct functions \(\sigma _2\) and \({\tilde{\sigma }}\) so that for each \(\xi \in {\mathcal {X}}\backslash {\mathcal {A}}\), for each \(\overline{x}_\varphi \in \hat{\varrho }_{\sigma _2} (\xi )\), for each \(\eta \in \overline{\mathbb {B}}\) and \((t,j) \in \mathrm {dom}\,\overline{x}_\varphi \) such that \(\overline{x}_\varphi (t,j,\xi ) \in {\mathcal {X}} \backslash {\mathcal {A}}\), the function defined on \(\mathrm {dom}\,\overline{x}_\varphi \cap ([0,t] \times \{0,\dots ,j\})\) by \((\tau ,k) \mapsto \overline{x}_\varphi (\tau ,k) + {\tilde{\sigma }} (\overline{x}_\varphi (\tau ,k)) \eta \) can be extended to a complete solution of \(\hat{\mathcal {H}}_\sigma \). Now, pick a maximal solution \(\overline{x}_\varphi (h,m,\xi )\) to \(\hat{\mathcal {H}}_{\sigma _2}\). First, let \(m = 0\). According to the definition of \(V_s\), Lemma 7.2 in [7], (32) and the fact that \(\psi :\mathbb R^{n}\rightarrow [0,1]\), we get for any \(\left| \mu \right| \le 1\) and any \(\overline{x}_{\varphi } \in \hat{\varrho }_{\sigma _2} (\xi )\) with \(\xi \in \hat{\mathcal {C}} \backslash {\mathcal {A}}\)

$$\begin{aligned} V_s (\overline{x}_\varphi (h,0,\xi ))&\le V_s ( \xi ) + \int \Big \{ \int _0^h \overline{\gamma }_1 (\left| \mu \right| \varphi (\omega (\overline{x}_\varphi (s,0,\xi ) + {\tilde{\sigma }} (\overline{x}_\varphi (s,0,\xi ))\eta ))) \mathrm {d} s \Big \}\nonumber \\&\quad \,\times \psi (\eta ) \mathrm {d} \eta . \end{aligned}$$
(41)

It follows from [7, Claim 6.3] that for any \(\xi \in \hat{\mathcal {C}} \backslash {\mathcal {A}}\) and \(f \in \hat{F}(\xi )\), there exists a solution \(\overline{x}_\varphi \in \hat{\varrho }_{\sigma _2} (\xi )\) such that for small enough \(h > 0\), we get that \((h,0) \in \mathrm {dom} \, \overline{x}_\varphi \) and \(\overline{x}_\varphi (h,0,\xi ) = \xi + h f\). So it follows from smoothness of \(V_s\) on \({\mathcal {X}} \backslash {\mathcal {A}}\), Claim 6.3 in [7], the inequality (41) and the mean value theorem that

$$\begin{aligned} \left\langle {\nabla V_s , f} \right\rangle =&\lim _{h \rightarrow 0^{+}} \frac{V_s (\xi + h f) - V_s (\xi )}{h}&\\ \le&\lim _{h \rightarrow 0^+} \int \overline{\gamma }_1 (\left| \mu \right| \varphi (\omega (z + {\tilde{\sigma }} (z)\eta ))) \psi (\eta ) \mathrm {d} \eta .&\end{aligned}$$

where z lies in the line segment joining \(\xi \) to \(\xi + h f\). It follows from uniform continuity of \(\omega \) with respect to \(\eta \) on \(\overline{\mathbb {B}}\) that for any \(\xi \in \hat{\mathcal {C}} \backslash {\mathcal {A}}\) and \(f \in \hat{F}(\xi )\)

$$\begin{aligned} \left\langle {\nabla V_s , f} \right\rangle \le&\int \overline{\gamma }_1 (\left| \mu \right| \varphi (\omega (\xi + {\tilde{\sigma }} (\xi )\eta ))) \psi (\eta ) \mathrm {d} \eta&\\ \le&\sup _{z \in \xi + {\tilde{\sigma }} (\xi ) \overline{\mathbb {B}}} \overline{\gamma }_1 (\left| \mu \right| \varphi (\omega (z))) .&\end{aligned}$$

From Claim 7.6 and Lemma 7.7 in [7], there exists some \(\sigma _u (\cdot )\) with \({\tilde{\sigma }} (\xi ) \le \sigma _{u} (\xi )\) for all \(\xi \in {\mathcal {X}} \backslash {\mathcal {A}}\) so that we get for all \(\xi \in \hat{\mathcal {C}} \backslash {\mathcal {A}}\) and \(f \in \hat{F}(\xi )\)

$$\begin{aligned} \left\langle {\nabla V_s , f} \right\rangle&\le \sup _{z \in \xi + \sigma _u (\xi ) \overline{\mathbb {B}}} \overline{\gamma }_1 (\left| \mu \right| \varphi (\omega (z)))&\nonumber \\&\le \overline{\gamma }_1 (\left| \mu \right| \varphi (\mu _2 (\omega (\xi )))) . \end{aligned}$$
(42)

Therefore, for any \(\overline{\gamma }_1 , \varphi , \mu _2 \in \mathcal {K}_\infty \) with \(\mu _2 > {{\mathrm{id}}}\) and any \(\left| \mu \right| \le 1\), there exists \({{\tilde{\gamma }}}_1 \in \mathcal {K}_\infty \) such that (37) holds.

Now let \((h,m) = (0,1)\). It then follows from the definition of \(V_s\), Lemma 7.2 in [7], the growth condition (33), and the fact that \(\psi :\mathbb R^{n}\rightarrow [0,1]\) that for any \(\left| \mu \right| \le 1\), each \(\xi \in \hat{\mathcal {D}}\) and \(g \in \hat{G}(\xi )\)

$$\begin{aligned} V_{s} (\overline{x}_\varphi (0,1,\xi ))&\le V_s (\xi ) + \int \overline{\gamma }_{2} (\left| \mu \right| \varphi (\omega (\xi + {\tilde{\sigma }} (\xi ) \eta ))) \psi (\eta ) \mathrm {d} \eta&\\&\le V_{s} (\xi ) + \sup _{z \in \xi + {\tilde{\sigma }} (\xi ) \overline{\mathbb {B}}} \overline{\gamma }_{2} (\left| \mu \right| \varphi (\omega (z))) .&\end{aligned}$$

From [7, Claim 7.6] and [7, Lemma 7.7], there exists \(\sigma _{u}\) with \({\tilde{\sigma }} (\xi ) \le \sigma _{u} (\xi )\) for all \(\xi \in {\mathcal {X}} \backslash {\mathcal {A}}\) so that we have for all \(\xi \in \hat{\mathcal {D}} \backslash {\mathcal {A}}\) and \(g \in \hat{G}(\xi )\)

$$\begin{aligned} V_{s} (g)&\le V_{s} (\xi ) + \sup _{z \in \xi + {\tilde{\sigma }} (\xi ) \overline{\mathbb {B}}} \overline{\gamma }_{2} (\left| \mu \right| \varphi (\omega (z)))&\nonumber \\&\le V_{s} (\xi ) + \sup _{z \in \xi + \sigma _{u} (\xi ) \overline{\mathbb {B}}} \overline{\gamma }_{2} (\left| \mu \right| \varphi ( \omega (z)))&\nonumber \\&\le V_{s} (\xi ) + \overline{\gamma }_{2} (\left| \mu \right| \varphi (\mu _{2}(\omega (\xi )))) .&\end{aligned}$$
(43)

With the same arguments as those used for flows, there exists \({\tilde{\gamma }}_2 \in \mathcal {K}_\infty \) such that

$$\begin{aligned} V_{s} (g) \le V_{s} (\xi ) + {\tilde{\gamma }}_{2} (\left| \mu \right| \varphi (\omega (\xi ))) . \end{aligned}$$

Moreover, if \(\xi \in \hat{\mathcal {D}}\) and \(g \in {\mathcal {A}}\) then \(0 = V_{s} (g) \le V_{s} (\xi ) + {\tilde{\gamma }}_2 (\left| \mu \right| \varphi (\omega (\xi )))\). So the growth condition (38) holds.\(\square \)

4.3.5 Final smoothing

The next lemma is concerned with smoothing \(V_{s}\) on \({\mathcal {A}}\).

Lemma 4

Let \(\mathcal {H}\) be pre-iISS. Also, let \(V_{s}\), \({\tilde{\gamma }}_{1},{\tilde{\gamma }}_{2}\) and \(\varphi \) come from Lemma 3. Then there exist \(\underline{\alpha }, \overline{\alpha } \in \mathcal {K}_\infty \) and a \(\mathcal {K}_\infty \)-function \(p\), smooth on \((0,+\infty )\), such that the function \(V_1 :{\mathcal {X}} \rightarrow \mathbb R_{\ge 0}\) defined by

$$\begin{aligned} V_{1} (\xi ) := p ( V_{s} (\xi )) \qquad \qquad \forall \xi \in {\mathcal {X}} \end{aligned}$$
(44)

is smooth on \({\mathcal {X}}\) and, for any \(\left| \mu \right| \le 1\), the following hold

$$\begin{aligned} \underline{\alpha } ( \omega (\xi ) ) \le V_{1} (\xi )\le & {} \overline{\alpha } ( \omega (\xi ) ) \;\;\;\;\,\qquad \qquad \forall \xi \in {\mathcal {X}} , \end{aligned}$$
(45)
$$\begin{aligned} \max _{f \in \hat{F}(\xi )} \langle \nabla V_{1} (\xi ),f \rangle\le & {} {\tilde{\gamma }}_{1} (\left| \mu \right| \varphi (\omega (\xi ))) \qquad \forall \xi \in \hat{\mathcal {C}} , \end{aligned}$$
(46)
$$\begin{aligned} \max _{g \in \hat{G}(\xi )} V_{1} (g) - V_{1} (\xi )\le & {} {\tilde{\gamma }}_{2} (\left| \mu \right| \varphi (\omega (\xi ))) \qquad \forall \xi \in \hat{\mathcal {D}} . \end{aligned}$$
(47)

Proof

With Lemma 4.3 in [14], there exists a smooth function \(p \in \mathcal {K}_\infty \) such that \(p' (s) > 0\) for all \(s > 0\) where \(p' (\cdot ) := \frac{dp}{ds} (\cdot )\) and \(p(V_{s} (\xi ))\) is smooth for all \(\xi \in {\mathcal {X}}\). Without loss of generality, one can assume that \(p' (s) \le 1\) for all \(s > 0\) (cf. Page 1090 of [4] for more details). Using the definition of \(V_1\) and (40), we have

$$\begin{aligned} p \circ \alpha \circ \mu _1 ( \omega (\xi ) ) \le V_1 (\xi ) \le p \circ \overline{\beta }_0 \circ \mu _2 ( \omega (\xi ) ) \qquad \forall \xi \in \mathcal {X}. \end{aligned}$$
(48)

Therefore, (45) holds.

It follows from, in succession, the definition of \(V_1\), (37) and the fact that \(0 < p' (s) \le 1\) for all \(s > 0\) that for all \(\xi \in \hat{\mathcal {C}} \backslash {\mathcal {A}}\)

$$\begin{aligned} \max _{f \in \hat{F}(\xi )} \langle \nabla V_{1} (\xi ),f \rangle&\le p' (V_{s}(\xi )) {\tilde{\gamma }}_{1} (\left| \mu \right| \varphi (\omega (\xi ))) \le {\tilde{\gamma }}_{1} (\left| \mu \right| \varphi (\omega (\xi ))) . \end{aligned}$$

It then follows from the facts that \(\nabla V_1 (\xi ) = 0\) and \(\omega (\xi ) = 0\) for all \(\xi \in {\mathcal {A}}\), and that \({\tilde{\gamma }}_1\) and \(\varphi \) vanish at zero, that

$$\begin{aligned} \max _{f \in \hat{F}(\xi )} \langle \nabla V_1 (\xi ),f \rangle \le {\tilde{\gamma }}_1 (\left| \mu \right| \varphi (\omega (\xi ))) \qquad \forall \xi \in \hat{\mathcal {C}} . \end{aligned}$$

It follows, in succession, from the definition of \(V_1\), the mean value theorem, the last inequality of (43), and the fact that \(0 < p' (s) \le 1\) for all \(s > 0\) that for all \(\xi \in \hat{\mathcal {D}}\) and \(g \in \hat{G}(\xi )\)

$$\begin{aligned} V_1 (g) - V_1 (\xi ) = p'(z) (V_s (g) - V_s (\xi )) \le {\tilde{\gamma }}_2 (\left| \mu \right| \varphi (\omega (\xi ))) \end{aligned}$$

where z lies on the segment joining \(V_s (\xi )\) to \(V_s (g)\). \(\square \)

4.3.6 Return to \(\mathcal {H}\)

The following lemma is immediately obtained from Lemma 4 and (28).

Lemma 5

Let \(\mathcal {H}\) be pre-iISS. Let \(\varphi ,{\tilde{\gamma }}_1,{\tilde{\gamma }}_2 \in \mathcal {K}_\infty \) be generated by Lemma 3. Also, let \(\underline{\alpha } , \overline{\alpha } \in \mathcal {K}_\infty \) and \(V_1 :{\mathcal {X}} \rightarrow \mathbb R_{\ge 0}\) come from Lemma 4. Then the following hold

$$\begin{aligned}&\underline{\alpha } ( \omega (\xi ) ) \le V_1 (\xi ) \le \bar{\alpha } ( \omega (\xi ) ) \quad \forall \xi \in {\mathcal {X}} ,&\end{aligned}$$

for any \((\xi ,u) \in \mathcal {C}\) with \(\left| u\right| \le \varphi (\omega (\xi ))\)

$$\begin{aligned}&\langle \nabla V_1 (\xi ),f (\xi ,u) \rangle \le {\tilde{\gamma }}_1 (\left| u\right| ) ,&\end{aligned}$$

for any \((\xi ,u) \in \mathcal {D}\) with \(\left| u\right| \le \varphi (\omega (\xi ))\)

$$\begin{aligned}&V_1 (g(\xi ,u)) - V_1 (\xi ) \le {\tilde{\gamma }}_2 ( \left| u\right| ) .&\end{aligned}$$

4.3.7 A characterization of 0-input pre-AS

To continue with the proof, we need a dissipation characterization of 0-input pre-AS, which is stated in Proposition 2. This proposition is a unification and generalization of [4, Proposition II.5].

Proposition 2

\(\mathcal {H}\) is 0-input pre-AS if and only if there exist a smooth semi-proper function \(W :{\mathcal {X}} \rightarrow \mathbb R_{\ge 0}\), \(\lambda \in \mathcal {K}\) and a continuous function \(\rho \in \mathcal {PD}\) such that

$$\begin{aligned} \left\langle {\nabla W(\xi ),f(\xi ,u)} \right\rangle\le & {} - \rho ( \omega (\xi ) ) + \lambda ( \left| u\right| ) \quad \forall (\xi ,u) \in \mathcal {C}, \end{aligned}$$
(49)
$$\begin{aligned} W ( g(\xi ,u) ) - W ( \xi )\le & {} - \rho ( \omega (\xi ) ) + \lambda ( \left| u\right| ) \quad \forall (\xi ,u) \in \mathcal {D}. \end{aligned}$$
(50)

Proof

Sufficiency is clear. We establish necessity. To this end, the following lemma is needed.

Lemma 6

\(\mathcal {H}\) is 0-input pre-AS if and only if there exist a smooth Lyapunov function \(V :\mathcal {X}\rightarrow \mathbb R_{\ge 0}\), functions \(\alpha _1,\alpha _2,\alpha _3,\chi \in \mathcal {K}_\infty \), and a smooth function \(q :\mathbb R_{\ge 0}\rightarrow \mathbb R_{> 0}\) with \(q(s) = 1\) for all \(s \in [0,1]\) such that

$$\begin{aligned} \alpha _1 ( \omega (\xi ) ) \le V(\xi )&\le \alpha _2 ( \omega (\xi ) ) \;\quad \; \forall \,\, \xi \in {\mathcal {X}} ,&\end{aligned}$$
(51)
$$\begin{aligned} \left\langle \nabla V(\xi ),f(\xi ,q(\omega (\xi ))I\nu ) \right\rangle&\le - \alpha _3 \left( \omega ( \xi ) \right) \quad \forall (\xi ,q(\omega (\xi ))I\nu ) \in \mathcal {C}\text { with } \omega (\xi ) > \chi (\left| \nu \right| ) ,&\end{aligned}$$
(52)
$$\begin{aligned} V(g(\xi ,q(\omega (\xi ))I\nu )) - V(\xi )&\le - \alpha _3 \left( \omega (\xi ) \right) \quad \forall (\xi ,q(\omega (\xi ))I\nu ) \in \mathcal {D}\text { with } \omega (\xi ) > \chi (\left| \nu \right| ) .&\end{aligned}$$
(53)

where I is the \(m \times m\) identity matrix.

Proof

See Appendix 1. \(\square \)

Now we can pursue the proof of Proposition 2. Let \(\mathcal {H}\) be 0-input pre-AS. Recalling Lemma 6, there exists a Lyapunov function V with the properties (51)–(53). Using [26, Remark 2.4], we can show that there exists some \(\alpha _{4} \in \mathcal {K}_\infty \) such that (52) and (53) are equivalent to

$$\begin{aligned} \left\langle \nabla V(\xi ),f(\xi ,q(\omega (\xi ))I\nu ) \right\rangle\le & {} - \alpha _3 \left( \omega (\xi ) \right) + \alpha _{4} \left( \left| \nu \right| \right) \qquad \forall (\xi ,q(\omega (\xi ))I\nu ) \in \mathcal {C} , \nonumber \\\end{aligned}$$
(54)
$$\begin{aligned} V(g(\xi ,q(\omega (\xi ))I\nu )) - V(\xi )\le & {} - \alpha _3 \left( \omega (\xi ) \right) + \alpha _4 \left( \left| \nu \right| \right) \qquad \forall (\xi ,q(\omega (\xi ))I\nu ) \in \mathcal {D} . \nonumber \\ \end{aligned}$$
(55)

Given [4, Corollary IV.5], there exists \(\lambda \in \mathcal {K}\) such that \(\alpha _4 (sr) \le \lambda (s) \lambda (r)\) for all \((s,r) \in \mathbb R_{\ge 0}\times \mathbb R_{\ge 0}\). Writing \(u := q(\omega (\xi ))I\nu \), so that \(\left| \nu \right| = \left| u\right| /q(\omega (\xi ))\), and applying this bound with \(s = 1/q(\omega (\xi ))\) and \(r = \left| u\right| \), we obtain

$$\begin{aligned} \left\langle \nabla V(\xi ),f(\xi ,u) \right\rangle\le & {} - \alpha _{3} (\omega (\xi )) + \lambda (1/q(\omega (\xi ))) \lambda (\left| u\right| ) \qquad \; \forall (\xi ,u) \in \mathcal {C}, \\ V(g(\xi ,u)) - V(\xi )\le & {} - \alpha _3 (\omega (\xi )) + \lambda (1/q(\omega (\xi ))) \lambda (\left| u\right| ) \;\qquad \forall (\xi ,u) \in \mathcal {D}\end{aligned}$$

Define \(\pi :\mathbb R_{\ge 0}\rightarrow \mathbb R_{\ge 0}\) as

$$\begin{aligned} \pi ( r ) = \int _0^r {\frac{ds}{c + \theta (s)}} \end{aligned}$$

where \(c > 0\) and \(\theta \in \mathcal {K}\) are defined below. We note that \(\pi \in \mathcal {K}\). Let \(W(\xi ) := \pi (V(\xi ))\) for all \(\xi \in {\mathcal {X}}\). Taking the time derivative and difference of \(W(\xi )\) and recalling the last two inequalities yields

$$\begin{aligned} \left\langle {\nabla W(\xi ),f(\xi ,u)} \right\rangle&\le \frac{\left\langle {\nabla V(\xi ),f(\xi ,u)} \right\rangle }{c + \theta ( V(\xi ))} \\&\le - \frac{\alpha _3 ( \omega (\xi ) )}{c + \theta ( V(\xi ))} + \frac{\lambda ( 1/q ( \omega (\xi ) )) \lambda (\left| u\right| )}{c + \theta ( V(\xi ))} \qquad \forall (\xi ,u) \in \mathcal {C} , \\ W( g(\xi ,u)) - W(\xi )&\le \frac{V(g(\xi ,u)) - V(\xi )}{c + \theta ( V(\xi ))} \\&\le - \frac{\alpha _3 (\omega (\xi ))}{c + \theta ( V(\xi ))} + \frac{\lambda ( 1/q ( \omega (\xi ) ) ) \lambda (\left| u\right| )}{c + \theta ( V(\xi ) )} \qquad \forall (\xi ,u) \in \mathcal {D} . \end{aligned}$$

It follows from (51) that

$$\begin{aligned} \left\langle {\nabla W(\xi ),f(\xi ,u)} \right\rangle\le & {} - \frac{\alpha _3 (\omega (\xi ))}{c + \theta \circ \alpha _2 (\omega (\xi ))} + \frac{\lambda ( 1/q ( \omega (\xi )) ) \lambda (\left| u\right| )}{c + \theta \circ \alpha _1(\omega (\xi ))} \qquad \forall (\xi ,u) \in \mathcal {C}, \\ W( g(\xi ,u)) - W(\xi )\le & {} - \frac{\alpha _{3} (\omega (\xi ))}{c + \theta \circ \alpha _2 (\omega (\xi ))} + \frac{\lambda ( 1/q ( \omega (\xi ))) \lambda (\left| u\right| )}{c + \theta \circ \alpha _1 (\omega (\xi ))} \qquad \forall (\xi ,u) \in \mathcal {D}. \end{aligned}$$

Let \(c := \lambda (1/q (0)) = \lambda (1)\). Since \(q\) is smooth and positive everywhere, and by the definition of \(c\), one can construct \(\theta \in \mathcal {K}\) such that

$$\begin{aligned} c + \theta \circ \alpha _1 (s) \ge \lambda ( 1/q (s)) \qquad \qquad \qquad \forall s \in \mathbb {R}_{\ge 0} . \end{aligned}$$
(56)

It then follows from (56) that

$$\begin{aligned} \left\langle \nabla W(\xi ),f(\xi ,u) \right\rangle\le & {} - \rho (\omega (\xi )) + \lambda (\left| u\right| ) \qquad \qquad \forall (\xi ,u) \in \mathcal {C}, \\ W( g(\xi ,u)) - W(\xi )\le & {} - \rho (\omega (\xi )) + \lambda (\left| u\right| ) \qquad \qquad \forall (\xi ,u) \in \mathcal {D}. \end{aligned}$$

where \(\rho (s) := \frac{\alpha _3 (s)}{c + \theta \circ \alpha _2 (s)}\) for all \(s \ge 0\). This proves the necessity. \(\square \)

As pre-iISS implies 0-input pre-AS, it follows from Proposition 2 that there exist a smooth semi-proper function W, \(\lambda \in \mathcal {K}\) and \(\rho \in \mathcal {PD}\) such that (49) and (50) hold. Define \(V :\mathcal {X}\rightarrow \mathbb R_{\ge 0}\) by \(V (\xi ) := W (\xi ) + V_1 (\xi )\) with \(V_1\) coming from Lemma 5. It follows from Lemma 5 and Proposition 2 that V is smooth everywhere and there exist \(\alpha _1,\alpha _2 \in \mathcal {K}_\infty \) such that

$$\begin{aligned} \alpha _1 ( \omega (\xi ) )&\le V (\xi ) \le \alpha _2 ( \omega (\xi ) ) \qquad \qquad \forall \xi \in \mathcal {X}.&\end{aligned}$$
(57)

We also have for any \((\xi ,u) \in \mathcal {C}\) with \(\left| u\right| \le \varphi (\omega (\xi ))\)

$$\begin{aligned} \langle \nabla V (\xi ),f (\xi ,u) \rangle&\le - \rho (\omega (\xi )) + \eta (\left| u\right| ),&\end{aligned}$$

and for any \((\xi ,u) \in \mathcal {D}\) with \(\left| u\right| \le \varphi (\omega (\xi ))\)

$$\begin{aligned} V (g(\xi ,u)) - V (\xi )&\le - \rho (\omega (\xi )) + \eta (\left| u\right| )&\end{aligned}$$

where \(\eta (\cdot ) := {\tilde{\gamma }} (\cdot ) + \lambda (\cdot )\) and \({\tilde{\gamma }} (\cdot ) := \max \{ {\tilde{\gamma }}_1 (\cdot ) , {\tilde{\gamma }}_2 (\cdot ) \}\). To show that V satisfies (3) and (4), let \(\chi = \varphi ^{-1}\) and define

$$\begin{aligned} \hat{\kappa } (r) := \max _{ \omega (\xi ) \le \chi (\left| u\right| ),\, \left| u\right| \le r,\, u \in \mathcal {U}} \big \{ \left\langle {\nabla V(\xi ) , f(\xi ,u)} \right\rangle + \rho ( \omega (\xi ) ) ,\; V (g(\xi ,u)) - V (\xi ) + \rho (\omega (\xi )) \big \} . \end{aligned}$$

Then define

$$\begin{aligned} \kappa (r) := \max \{ \hat{\kappa } (r) , \eta (r) \} . \end{aligned}$$

It is clear that \(\kappa \in \mathcal {K}\). We now consider two cases for \(u \in \mathcal {U}\). If \(\left| u\right| \le \varphi (\omega (\xi ))\), the two estimates above apply and \(\eta (\left| u\right| ) \le \kappa (\left| u\right| )\). If \(\left| u\right| \ge \varphi (\omega (\xi ))\), then \(\omega (\xi ) \le \chi (\left| u\right| )\) and the definition of \(\hat{\kappa }\) yields the same bounds with \(\hat{\kappa } (\left| u\right| ) \le \kappa (\left| u\right| )\). In both cases, we get

$$\begin{aligned} \left\langle {\nabla V(\xi ),f(\xi ,u)} \right\rangle&\le - \rho (\omega (\xi )) + \kappa ( \left| u\right| ) \quad \forall (\xi ,u) \in \mathcal {C} ,&\\ V (g(\xi ,u)) - V (\xi )&\le - \rho (\omega (\xi )) + \kappa ( \left| u\right| ) \quad \forall (\xi ,u) \in \mathcal {D} .&\end{aligned}$$

These estimates together with (57) show that V is a smooth iISS Lyapunov function for \(\mathcal {H}\). \(\square \)

5 iISS for sampled-data systems

A popular approach to the design of sampled-data systems is the emulation approach. The idea is to first ignore the communication constraints and design a continuous-time controller for the continuous-time plant, and then to establish conditions under which stability of the sampled-data control system, in an appropriate sense, is preserved under a digital implementation of the controller. The emulation approach enjoys considerable advantages in terms of the choice of continuous-time design tools. A central issue in emulation-based design is the choice of a sampling period that guarantees stability of the sampled-data system with the emulated controller. In a seminal work, Nešić et al. [17] developed an explicit formula for a maximum allowable sampling period (MASP) that ensures asymptotic stability of sampled-data nonlinear systems with emulated controllers.

Here we show the effectiveness of Theorem 1 by establishing that the MASP developed in [17] also guarantees iISS for a sampled-data control system. Consider the following plant model

$$\begin{aligned}&\begin{array}{rcl} \dot{x}_p &{}=&{} f_p (x_p,u,w) \\ y &{}=&{} g_p (x_p) \end{array} \end{aligned}$$
(58)

where \(x_p \in \mathbb R^{n_p}\) is the plant state, \(u \in \mathbb R^{n_u}\) is the control input, \(w \in \mathbb R^{n_w}\) is the disturbance input, and \(y \in \mathbb R^{n_y}\) is the plant output. Assume that \(f_p :\mathbb R^{n_p} \times \mathbb R^{n_u} \times \mathbb R^{n_w} \rightarrow \mathbb R^{n_p}\) is locally Lipschitz and \(f_p (0,0,0) = 0\). Since we follow the emulation method, we assume that a continuous-time controller is known which stabilizes the origin of system (58) in the sense of iISS in the absence of the network. We focus on dynamic controllers of the form

$$\begin{aligned} \begin{array}{rcl} \dot{x}_c &{} = &{} f_c (x_c,y) \\ u &{} = &{} g_c (x_c) \end{array}&\end{aligned}$$
(59)

where \(x_c \in \mathbb R^{n_c}\) is the controller state. Let \(g_c :\mathbb R^{n_c} \rightarrow \mathbb R^{n_u}\) be continuously differentiable in its argument.

We consider the scenario where the plant and the controller are connected via a digital channel. In particular, we assume that the plant is between a hold device and a sampler. Transmissions occur only at some given time instants \(t_j, j \in {\mathbb {Z}}_{> 0}\), such that \(\epsilon \le t_j-t_{j-1} \le \tau _\mathrm {MASP}\), where \(\epsilon \in (0,\tau _\mathrm {MASP}]\) represents the minimum time between any two transmission instants. Note that \(\epsilon \) can be taken arbitrarily small and it is only used to prevent Zeno behavior [10]. As in [17], a sampled-data control system with an emulated controller of the form (59) can be modeled by

$$\begin{aligned}&\begin{array}{rcll} \dot{x}_p &{}=&{} f_p (x_p,\hat{u},w) &{} t \in [t_{j-1},t_j] \\ y &{}=&{} g_p (x_p) \\ \dot{x}_c &{}=&{} f_c (x_c,\hat{y}) &{} t \in [t_{j-1},t_j] \\ u &{}=&{} g_c (x_c) \\ \dot{\hat{y}} &{}=&{} \hat{f}_p (x_p,x_c,\hat{y}, \hat{u}) &{} t \in [t_{j-1},t_j] \\ \dot{\hat{u}} &{}=&{} \hat{f}_c (x_p,x_c,\hat{y}, \hat{u}) &{} t \in [t_{j-1},t_j] \\ \hat{y} (t^+_j) &{}=&{} y (t_j) \\ \hat{u} (t^+_j) &{}=&{} u (t_j) \end{array} \end{aligned}$$
(60)

where \(\hat{y} \in \mathbb R^{n_y}\) and \(\hat{u} \in \mathbb R^{n_u}\) are, respectively, the most recently transmitted plant and controller output values. Between two successive transmission instants, these two variables are generated by the holding functions \(\hat{f}_p\) and \(\hat{f}_c\); the use of zero-order-hold devices, for instance, leads to \(\hat{f}_p \equiv 0\) and \(\hat{f}_c \equiv 0\). In addition, \(e := (e_y,e_u) \in \mathbb {R}^{n_e}\) denotes the vector of sampling-induced errors, where \(e_y := \hat{y} - y\in \mathbb R^{n_y}\) and \(e_u := \hat{u} - u\in \mathbb R^{n_u}\). Given \(x := (x_p,x_c) \in \mathbb R^{n_x}\), it is more convenient to rewrite (60) as the hybrid system

$$\begin{aligned}&\left. \begin{array}{rcl} \dot{x} &{}=&{} f (x,e,w) \\ \dot{e} &{}=&{} g (x,e,w) \\ \dot{\tau }&{}=&{} 1 \end{array} \right\} \tau \in [0,\tau _{\mathrm {MASP}}] \end{aligned}$$
(61)
$$\begin{aligned}&\left. \begin{array}{rcl} x^+ &{}=&{} x \\ e^+ &{}=&{} 0 \\ \tau ^+ &{}=&{} 0 \end{array} \right\} \tau \in [\epsilon ,\tau _{\mathrm {MASP}}] \end{aligned}$$
(62)

where \(\tau \in \mathbb R_{\ge 0}\) represents a clock and w denotes the disturbance input. We also have the flow set \(\mathcal {C}:= \{ (x,e,\tau ,w) :\tau \in [0,\tau _{\mathrm {MASP}}]\}\) and the jump set \(\mathcal {D}:= \{ (x,e,\tau ,w) :\tau \in [\epsilon ,\tau _{\mathrm {MASP}}]\}\).

To present our results, we need to make the following assumption.

Assumption 1

There exist locally Lipschitz functions \(V :\mathbb R^{n_x} \rightarrow \mathbb R_{\ge 0}\), \(W :\mathbb {R}^{n_e} \rightarrow \mathbb R_{\ge 0}\), a continuous function \(H :\mathbb R^{n_x} \rightarrow \mathbb R_{\ge 0}\), \(\underline{\alpha }_x,\overline{\alpha }_x,\underline{\alpha }_e,\overline{\alpha }_e \in \mathcal {K}_\infty \), \({\tilde{\alpha }} \in \mathcal {PD}\), \(\sigma _1,\sigma _2 \in \mathcal {K}\) and real numbers \(L,\gamma > 0\) such that the following hold

$$\begin{aligned}&\underline{\alpha }_x (\left| x\right| ) \le V (x) \le \overline{\alpha }_x (\left| x\right| ) \qquad \forall x \in \mathbb {R}^{n_x} ,&\end{aligned}$$
(63)

for almost all \(x \in \mathbb R^{n_x}\), all \(e \in \mathbb R^{n_e}\) and all \(w \in \mathbb R^{n_w}\)

$$\begin{aligned}&\langle \nabla V (x) , f(x,e,w) \rangle \le - {\tilde{\alpha }} (\left| x\right| ) - {\tilde{\alpha }} (W(e)) - [H(x)]^2 + \gamma ^2 [W(e)]^2 + \sigma _1 (\left| w\right| ) \end{aligned}$$
(64)

moreover,

$$\begin{aligned}&\underline{\alpha }_e (\left| e\right| ) \le W (e) \le \overline{\alpha }_e (\left| e\right| ) \qquad \forall e \in \mathbb R^{n_e}&\end{aligned}$$
(65)

and for almost all \(e \in \mathbb R^{n_e}\), all \(x \in \mathbb R^{n_x}\) and all \(w \in \mathbb {R}^{n_w}\)

$$\begin{aligned}&\left\langle \frac{\partial W (e)}{\partial e} , g(x,e,w) \right\rangle \le L W (e) + H(x) + \sigma _2 (\left| w\right| ) .&\end{aligned}$$
(66)

According to (63) and (64), the emulated controller guarantees the iISS property for the subsystem \(\dot{x} = f(x,e,w)\) with \(W\) and \(w\) as inputs. These properties can be verified by analyzing robustness of the closed-loop system (58)–(59) with respect to input and/or output measurement errors in the absence of the digital network. Finally, (66) holds, for instance, if \(g\) is globally Lipschitz and there exists \(M > 0\) such that \(\left| \frac{\partial W(e)}{\partial e}\right| \le M\) for almost all \(e \in \mathbb R^{n_e}\).

The last condition concerns the MASP. As in [17], the network must provide a sufficiently high bandwidth so that the following assumption holds.

Assumption 2

Let \(\tau _{\mathrm {MASP}}\) satisfy \(\tau _{\mathrm {MASP}} < \mathcal {T}(\gamma ,L)\), where

$$\begin{aligned} \mathcal {T}(\gamma ,L) := \left\{ \begin{array}{ll} \frac{1}{L r}\tan ^{-1}(r) &{} \gamma >L \\ \frac{1}{L} &{} L = \gamma \\ \frac{1}{L r}\tanh ^{-1}(r) &{} \gamma <L \end{array}\right. \end{aligned}$$
(67)

with \(r := \sqrt{\left| (\gamma / L)^2-1\right| }\).
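
For concreteness, the bound \(\mathcal {T}(\gamma ,L)\) in (67) is straightforward to evaluate. The following sketch (in Python; the helper name masp_bound is ours and is used purely for illustration) implements the three cases of (67) directly.

import math

def masp_bound(gam, L):
    # Evaluate T(gamma, L) from (67), with r = sqrt(|(gamma/L)^2 - 1|).
    if gam > L:
        r = math.sqrt((gam / L) ** 2 - 1.0)
        return math.atan(r) / (L * r)
    if gam == L:
        return 1.0 / L
    r = math.sqrt(1.0 - (gam / L) ** 2)
    return math.atanh(r) / (L * r)

print(masp_bound(10.0, 3.0))

For instance, masp_bound(10.0, 3.0) returns approximately 0.13; these are the values used in the example at the end of this section.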

Now we are ready to give the main result of this section.

Theorem 3

Let Assumptions 1 and 2 hold. Then hybrid system (61) and (62) is iISS with respect to the compact set \({\mathcal {A}} := \{ (x,e,\tau ) :x = 0 , e = 0 \}\).

Proof

To prove the theorem, we appeal to Theorem 1. In particular, we establish that hybrid system (61) and (62) is smoothly dissipative. Moreover, hybrid system (61) and (62) is 0-input AS under Assumptions 1 and 2, as shown in [17]. Hence, by the implication \((iv) \Rightarrow (i)\) of Theorem 1, (69) is iISS. To establish the dissipativity property of (61) and (62), the following two lemmas are needed.

Lemma 7

Given \(c > 1\) and \(\lambda \in (0,1)\), define

$$\begin{aligned} \tilde{\mathcal {T}} (c,\lambda ,L,\gamma ) := \left\{ \begin{array}{ll} \frac{1}{L r}\tan ^{-1}\left( \frac{r(1-\lambda )}{2\left( \frac{\lambda }{\lambda +1}\right) \left( \frac{\gamma }{L}\left( \frac{c+1}{2}\right) -1\right) +1+\lambda }\right) &{} L < \gamma \sqrt{c} \\ \frac{1}{L} \left( \frac{1-\lambda ^2}{\lambda ^2 + \frac{\gamma }{L}(1+c)\lambda +1}\right) &{} L = \gamma \sqrt{c} \\ \frac{1}{L r}\tanh ^{-1}\left( \frac{r(1-\lambda )}{2\left( \frac{\lambda }{\lambda +1}\right) \left( \frac{\gamma }{L}\left( \frac{c+1}{2}\right) -1\right) +1+\lambda }\right) &{} L > \gamma \sqrt{c} \end{array}\right. \end{aligned}$$

where \(r := \sqrt{\left| c(\gamma / L)^2-1\right| }\). Let \(\phi :[0,\tilde{\mathcal {T}}] \rightarrow \mathbb {R}\) be the solution to

$$\begin{aligned} \dot{\phi }= -2 L \phi - \gamma (\phi ^2+c) \qquad \phi (0) = \lambda ^{-1}. \end{aligned}$$
(68)

Then \(\phi (\tau ) \in [\lambda ,\lambda ^{-1}]\) for all \(\tau \in [0,\tilde{\mathcal {T}}]\).
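
Although a proof of Lemma 7 is not reproduced here, its claim is easy to check numerically. The following sketch (in Python; the values \(c = 2\), \(\lambda = 0.5\), \(L = 3\), \(\gamma = 10\) and the helper name T_tilde are ours, chosen only to illustrate the case \(L < \gamma \sqrt{c}\)) evaluates \(\tilde{\mathcal {T}}\) and integrates (68) with a classical Runge–Kutta scheme, so that the containment \(\phi (\tau ) \in [\lambda ,\lambda ^{-1}]\) on \([0,\tilde{\mathcal {T}}]\) can be checked directly.

import math

# Illustrative values (ours, not from the paper): c > 1, lambda in (0,1),
# L, gamma > 0, chosen so that L < gamma*sqrt(c).
c, lam, L, gam = 2.0, 0.5, 3.0, 10.0

def T_tilde(c, lam, L, gam):
    # First case of T~(c, lambda, L, gamma) in Lemma 7, with r as defined there.
    r = math.sqrt(abs(c * (gam / L) ** 2 - 1.0))
    den = 2.0 * (lam / (lam + 1.0)) * ((gam / L) * (c + 1.0) / 2.0 - 1.0) + 1.0 + lam
    return math.atan(r * (1.0 - lam) / den) / (L * r)

def phi_dot(phi):
    # Right-hand side of the differential equation (68).
    return -2.0 * L * phi - gam * (phi ** 2 + c)

# Integrate (68) from phi(0) = 1/lambda over [0, T~] with classical RK4.
T = T_tilde(c, lam, L, gam)
steps = 10000
h = T / steps
phi = 1.0 / lam
phi_min = phi
for _ in range(steps):
    k1 = phi_dot(phi)
    k2 = phi_dot(phi + 0.5 * h * k1)
    k3 = phi_dot(phi + 0.5 * h * k2)
    k4 = phi_dot(phi + h * k3)
    phi += (h / 6.0) * (k1 + 2.0 * k2 + 2.0 * k3 + k4)
    phi_min = min(phi_min, phi)

# phi should stay in [lambda, 1/lambda] (up to numerical error),
# reaching approximately lambda at tau = T~.
print(f"T~ = {T:.5f}, phi(T~) = {phi:.5f}, min phi = {phi_min:.5f}")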

Lemma 8

For any fixed \(L\) and \(\gamma \), \(\tilde{\mathcal {T}}(\cdot ,\cdot ,L,\gamma ) : (1,+\infty ) \times (0,1) \rightarrow \mathbb R_{> 0}\) is continuous and strictly decreasing to zero with respect to the first two arguments.

Let \(\tau _{\mathrm {MASP}} < \mathcal {T} (\gamma ,L)\) be given. For the sake of convenience, denote \(\xi := [x^\top ,e^\top ,\tau ]^\top \), \(F(\xi ,w) := [f(x,e,w)^\top ,g(x,e,w)^\top ,1]^\top \) and \(G(\xi ,w) := [x^\top ,0^\top ,0]^\top \). Also, rewrite hybrid system (61) and (62) as

$$\begin{aligned}&\mathcal {H} := \left\{ \begin{array}{l} \dot{\xi } = F (\xi ,w) \;\; \quad (\xi ,w) \in \mathcal {C} \\ \xi ^{+} = G (\xi ,w) \quad (\xi ,w) \in \mathcal {D} \end{array} \right. .&\end{aligned}$$
(69)

It follows from Lemma 8 that there exist \(c > 1\) and \(\lambda \in (0,1)\) such that \(\tau _{\mathrm {MASP}} = \tilde{\mathcal {T}} (c,\lambda ,L,\gamma )\). Let the quadruple \((c,\lambda ,L,\gamma )\) generate \(\phi \) via Lemma 7. Also, let

$$\begin{aligned} U (\xi ) := V(x) + \gamma \phi (\tau ) [W(e)]^2 . \end{aligned}$$

By (63), (65) and the fact that \(\phi (\tau ) \in [\lambda ,\lambda ^{-1}]\) for all \(\tau \in [0,\tau _{\mathrm {MASP}}]\) (cf. Lemma 7), there exist \(\underline{\alpha },\overline{\alpha }\in \mathcal {K}_\infty \) such that the following hold

$$\begin{aligned}&\underline{\alpha }(\left| [x,e]\right| ) \le U (\xi ) \le \overline{\alpha }(\left| [x,e]\right| ) .&\end{aligned}$$
(70)

For any \((\xi ,w) \in \mathcal {C}\), we have

$$\begin{aligned} \left\langle \nabla U (\xi ),F(\xi ,w)\right\rangle&= \left\langle \nabla V(x), f(x,e,w) \right\rangle + 2 \gamma \phi (\tau ) W(e) \left\langle \frac{\partial W}{\partial e} , g(x,e,w) \right\rangle \\&\quad + \gamma \dot{\phi }(\tau ) [W(e)]^2 . \end{aligned}$$

It follows from (64), (66) and (68) that

$$\begin{aligned} \left\langle \nabla U (\xi ),F(\xi ,w)\right\rangle \le&- {\tilde{\alpha }} (\left| x\right| ) - {\tilde{\alpha }} (W(e)) - [H(x)]^2 + \gamma ^2 [W(e)]^2 + \sigma _1 (\left| w\right| ) \\&\quad + 2 \gamma \phi (\tau ) W(e) [L W (e) + H(x) + \sigma _2 (\left| w\right| )]\\&\quad - \gamma [2 L \phi + \gamma (\phi ^2+c)] [W(e)]^2 \\&= - {\tilde{\alpha }} (\left| x\right| ) - {\tilde{\alpha }} (W(e)) - [\gamma \phi (\tau ) W(e) - H(x)]^2 \\&\quad - (c-1)\gamma ^2 [W (e)]^2 \\&\quad + \sigma _1 (\left| w\right| ) + 2 \gamma \phi (\tau ) W(e)\sigma _2 (\left| w\right| ) \\&\le - {\tilde{\alpha }} (\left| x\right| ) - {\tilde{\alpha }} (W(e)) - (c-1)\gamma ^2 [W (e)]^2 + \sigma _1 (\left| w\right| ) \\&\quad + 2 \gamma \phi (\tau ) W(e)\sigma _2 (\left| w\right| ) . \end{aligned}$$

From Young’s inequality \(2ab \le \varepsilon a^{2} + b^{2}/\varepsilon \), applied with \(a = \gamma W(e)\) and \(b = \phi (\tau ) \sigma _2 (\left| w\right| )\), for any \(\varepsilon > 0\) we have

$$\begin{aligned} \left\langle \nabla U (\xi ),F(\xi ,w)\right\rangle \le&- {\tilde{\alpha }} (\left| x\right| ) - {\tilde{\alpha }} (W(e)) - (c-1)\gamma ^2 [W (e)]^2 + \sigma _1 (\left| w\right| )&\\&+ \varepsilon \gamma ^2 [W(e)]^2+ \frac{[\phi (\tau )]^2}{\varepsilon } [\sigma _2 (\left| w\right| )]^2 .&\end{aligned}$$

It follows from Lemma 7 that

$$\begin{aligned} \left\langle \nabla U (\xi ),F(\xi ,w) \right\rangle \le&- {\tilde{\alpha }} (\left| x\right| ) - {\tilde{\alpha }} (W(e)) - (c-\varepsilon -1) \gamma ^2 [W (e)]^2 + \sigma _1 (\left| w\right| ) \\&+ \frac{1}{\lambda ^2\varepsilon } [\sigma _2 (\left| w\right| )]^2 .&\end{aligned}$$

Given \(\sigma (\cdot ) := \sigma _1 (\cdot ) + \frac{1}{\lambda ^2\varepsilon } [\sigma _2 (\cdot )]^2\), we get

$$\begin{aligned} \left\langle \nabla U (\xi ),F(\xi ,w) \right\rangle \le&- {\tilde{\alpha }} (\left| x\right| ) - {\tilde{\alpha }} (W(e)) - (c-\varepsilon -1) \gamma ^2 [W (e)]^2 + \sigma (\left| w\right| ) .&\end{aligned}$$

Picking \(\varepsilon \) sufficiently small such that \(c-\varepsilon -1 > 0\) gives

$$\begin{aligned} \left\langle \nabla U (\xi ),F(\xi ,w) \right\rangle \le&- {\tilde{\alpha }} (\left| x\right| ) - {\tilde{\alpha }} (W(e)) + \sigma (\left| w\right| ) .&\end{aligned}$$

Then

$$\begin{aligned}&\left\langle \nabla U (\xi ),F(\xi ,w) \right\rangle \le \sigma (\left| w\right| ) .&\end{aligned}$$
(71)

Also, for any \((\xi ,w) \in \mathcal {D}\), we have

$$\begin{aligned}&U (\xi ^+) = V(x^+) + \gamma \phi (\tau ^+) [W(e^+)]^2 .&\end{aligned}$$

It follows from (69) that

$$\begin{aligned} U (\xi ^+) = V(x) + \gamma \phi (0) [W(0)]^2 .&\end{aligned}$$

By the fact that \(W(0) = 0\), we get

$$\begin{aligned}&U (\xi ^+) \le V(x) \le U(\xi ).&\end{aligned}$$

Thus

$$\begin{aligned} U (\xi ^+) - U(\xi ) \le 0&\end{aligned}$$
(72)

for all \((\xi ,w) \in \mathcal {D}\). Given (70), (71) and (72), we conclude that (69) is smoothly dissipative with \(\rho (\xi ) \equiv 0\) as in (7) and (8). \(\square \)

Remark 5

Variants of Theorem 3, including a (semiglobal) practical iISS property, can be obtained by appropriate modifications to Assumption 1. Moreover, motivated by the connections between sampled-data systems and other engineering systems such as networked control systems and event-triggered control systems, we foresee that our results on sampled-data systems can also be useful for the study of the iISS property of such hybrid systems.

To verify the effectiveness of Theorem 3, we give an illustrative example. Consider the continuous-time plant with a bounded-input controller

$$\begin{aligned}&\dot{x} = \sin (x) + u + w&\\&u = - \frac{x}{1+x^2} - \sin (x)&\end{aligned}$$

where \(x,u,w \in \mathbb R\). Ignoring the digital channel, the closed-loop system reduces to \(\dot{x} = - \frac{x}{1+x^2} + w\), which is not ISS but is iISS (a verification is sketched at the end of this section). Taking the effects of the digital channel into account, we rewrite the system as a hybrid system of the form (61) and (62)

$$\begin{aligned}&\begin{array}{rcll} \dot{x} &{}=&{} - \frac{x+e_x}{1+(x+e_x)^2} + \sin (x) - \sin (x+e_x) + e_u + w &{} \quad t \in [t_{j-1},t_j] \\ \dot{e}_u &{}=&{} 0 &{} \quad t \in [t_{j-1},t_j] \\ \dot{e}_x &{}=&{} \frac{x+e_x}{1+(x+e_x)^2} - \sin (x) + \sin (x+e_x) - e_u - w &{} \quad t \in [t_{j-1},t_j] \\ e_u (t^+_j) &{}=&{} 0 \\ e_x (t^+_j) &{}=&{} 0 . \end{array} \end{aligned}$$

Taking \(V(x) = \left| x\right| \) and \(W(e)=\left| e\right| \), the requirements of Assumption 1 are satisfied with \(L = 3\), \(\gamma = 10\) and \(H(x) = \frac{\left| x\right| }{1+x^2}\). This choice of parameters gives, via (67), \(\tau _{\mathrm {MASP}} \simeq 0.13\).
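
To substantiate the claim above that, in the absence of the digital channel, the closed loop \(\dot{x} = - \frac{x}{1+x^2} + w\) is iISS but not ISS, one may use the following standard computation (included only as an illustration; the Lyapunov function \(V_{cl}\) is ours and is not part of the analysis above). With \(V_{cl} (x) := \ln (1+x^2)\),

$$\begin{aligned} \dot{V}_{cl} = \frac{2x}{1+x^2} \left( - \frac{x}{1+x^2} + w \right) \le - \frac{2x^2}{(1+x^2)^2} + \left| w\right| \end{aligned}$$

since \(\left| 2x/(1+x^2)\right| \le 1\); this is an iISS dissipation inequality with the positive definite decay rate \(s \mapsto 2s^2/(1+s^2)^2\). On the other hand, the closed loop is not ISS: for the bounded input \(w \equiv 1\) we have \(\dot{x} = - \frac{x}{1+x^2} + 1 \ge \frac{1}{2}\), so every solution grows unboundedly. Finally, evaluating (67) with \(L = 3\) and \(\gamma = 10\) (for instance with the sketch given after Assumption 2) yields \(\mathcal {T}(\gamma ,L) \approx 0.13\), which is consistent with the value \(\tau _{\mathrm {MASP}} \simeq 0.13\) reported above.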

6 Conclusions

This paper was primarily concerned with Lyapunov characterizations of pre-iISS for hybrid systems. In particular, we established that the existence of a smooth iISS Lyapunov function is equivalent to pre-iISS, which unifies and extends results in [1, 4]. We also related pre-iISS to dissipativity and detectability notions. Robustness of pre-iISS to vanishing perturbations was investigated, as well. We finally illustrated the effectiveness of our results by providing a maximum allowable sampling period guaranteeing iISS for sampled-data control systems.

Our results can be extended in several directions. In particular, further equivalent characterizations of pre-iISS could be pursued in terms of time-domain behaviors, including 0-input pre-AS plus uniform bounded-energy-bounded-state, as well as bounded-energy weakly-converging-state plus 0-input pre-local stability (cf. [2, 5] for the existing equivalent characterizations for continuous-time systems). Moreover, other related notions such as strong iISS, integral input-output-to-state stability and integral output-to-state stability could be investigated.