In this chapter we will be primarily interested in the linear impulsive RFDE

$$\displaystyle \begin{aligned}\dot x&=L(t)x_t + h(t),&t&\neq t_k \end{aligned} $$
(I.2.1)
$$\displaystyle \begin{aligned}\varDelta x&=B(k) x_{t^-}+r_k,&t&=t_k. \end{aligned} $$
(I.2.2)

The following assumptions will be needed throughout:

  1. H.1

    The representation

    $$\displaystyle \begin{aligned} \displaystyle L(t)\phi = \int_{-r}^0 [d_\theta \eta(t,\theta)]\phi(\theta) \end{aligned} $$

holds, where the integral is taken in the Lebesgue–Stieltjes sense, the function \(\eta :\mathbb {R}\times [-r,0]\rightarrow \mathbb {R}^{n\times n}\) is jointly measurable, is of bounded variation and right-continuous on [−r, 0] for each \(t\in \mathbb {R}\), and satisfies \(|L(t)\phi |\leq \ell (t)||\phi ||\) for some locally integrable \(\ell :\mathbb {R}\rightarrow \mathbb {R}\).

  2. H.2

The sequence \(t_k\) is monotonically increasing with \(|t_k|\rightarrow \infty \) as \(|k|\rightarrow \infty \), and the representation

    $$\displaystyle \begin{aligned} B(k)\phi = \int_{-r}^0 [d_\theta \gamma_k(\theta)]\phi(\theta) \end{aligned}$$

holds for \(k\in \mathbb {Z}\) for functions \(\gamma _k:[-r,0]\rightarrow \mathbb {R}^{n\times n}\) of bounded variation and right-continuous, such that \(||B(k)||\leq b(k)\).

Remark I.2.0.1

Assumption H.1 includes the case of discrete time-varying delays: for example, the linear differential-difference equation

$$\displaystyle \begin{aligned} \dot x = \sum_{k=1}^m A_k(t)x(t-r_k(t)) \end{aligned}$$

with \(r_k\) continuous, is associated with a linear operator satisfying condition H.1 with \(\eta (t,\theta )=\sum A_k(t)H_{-r_k(t)}(\theta )\), where \(H_z(\theta )=1\) if θ ≥ z and zero otherwise. It also includes a large class of distributed delays, such as those appearing in the differential equation

$$\displaystyle \begin{aligned} \dot x = \int_{-\tau}^0 K(t,\theta)x(t+\theta)d\theta. \end{aligned}$$

Similar results apply for the jump function B(k) and assumption H.2. Moreover, each of L(t) and B(k) is well-defined on \(\mathcal {R}\mathcal {C}\mathcal {R}\); see Theorem 2.23 from Chapter 3 of [66].
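The Stieltjes representation in H.1 is easy to check numerically in a simple case. The following sketch (Python, purely for illustration; the two-delay operator, the coefficients \(A_1,A_2\), the delays and the test history are all hypothetical choices, not data from the text) builds \(\eta(t,\theta)\) for a discrete-delay operator and compares a Riemann–Stieltjes sum against the direct evaluation of \(L(t)\phi\):

```python
import numpy as np

# Sketch of hypothesis H.1 for a hypothetical two-delay operator
#   L(t)phi = A1(t)*phi(-r1) + A2(t)*phi(-r2),
# realized through eta(t,theta) = A1(t)*H_{-r1}(theta) + A2(t)*H_{-r2}(theta),
# where H_z(theta) = 1 if theta >= z and 0 otherwise.
r = 1.0
r1, r2 = 0.3, 0.8
A1 = lambda t: np.sin(t)
A2 = lambda t: np.cos(t)

def L_direct(t, phi):
    return A1(t) * phi(-r1) + A2(t) * phi(-r2)

def L_stieltjes(t, phi, n=20000):
    # Riemann-Stieltjes sum of phi against d_theta eta(t, theta) on [-r, 0]
    thetas = np.linspace(-r, 0.0, n + 1)
    eta = A1(t) * (thetas >= -r1) + A2(t) * (thetas >= -r2)
    jumps = np.diff(eta)                        # increments of eta(t, .)
    mid = 0.5 * (thetas[:-1] + thetas[1:])      # evaluation points
    return float(np.sum(phi(mid) * jumps))

phi = lambda th: np.exp(th)                     # a smooth test history
print(L_stieltjes(1.2, phi), L_direct(1.2, phi))  # the two values agree
```

Since \(\eta(t,\cdot)\) is a step function here, the Stieltjes sum collapses onto the two jump locations, recovering the pointwise delays.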

1 Existence and Uniqueness of Solutions

Definition I.2.1.1

Let \((s,\phi )\in \mathbb {R}\times \mathcal {R}\mathcal {C}\mathcal {R}\). A function \(x\in \mathcal {R}\mathcal {C}\mathcal {R}([s-r,\alpha ),\mathbb {R}^n)\) for some α > s is an integrated solution of the linear impulsive RFDE (I.2.1)–(I.2.2) satisfying the initial condition (s, ϕ) if it satisfies x s = ϕ and the integral equation

$$\displaystyle \begin{aligned} x(t)&=\left\{\begin{array}{lc}\phi(0) + \int_s^t [L(\mu)x_\mu + h(\mu)]d\mu + \sum_{s< t_i\leq t}[B(i) x_{ t_i^-}+r_i],&t>s\\ \phi(t-s),&s-r\leq t\leq s. \end{array}\right. \end{aligned} $$
(I.2.3)

Lemma I.2.1.1

Let \(h\in \mathcal {R}\mathcal {C}\mathcal {R}(\mathbb {R},\mathbb {R}^n)\), let \(\{r_k:k\in \mathbb {Z}\}\subset \mathbb {R}^n\) and let hypotheses H.1–H.2 hold. For all \((s,\phi )\in \mathbb {R}\times \mathcal {R}\mathcal {C}\mathcal {R}\), there exists a unique integrated solution \(x\in \mathcal {R}\mathcal {C}\mathcal {R}([s-r,\infty ),\mathbb {R}^n)\) of (I.2.1)–(I.2.2) satisfying the initial condition (s, ϕ).

The above lemma follows from hypotheses H.1–H.2, the Banach fixed-point theorem, Lemma I.1.5.1 and standard continuation arguments. Note that h may be unbounded on the real line, but since it is regulated, it is bounded on every compact set—see Honig [69]. Any classical solution (in the sense of Ballinger and Liu [13]) is an integrated solution, so the definition is indeed appropriate. We will drop the adjective integrated from now on, since this class of solutions will be used exclusively.
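To get a concrete feel for the integral equation (I.2.3), one can discretize it by a forward Euler scheme. The sketch below (a scalar equation; the coefficients a, c, b, the forcing, the impulse times and the impulse forcing are all hypothetical choices for illustration) produces a right-continuous solution with visible jumps at the \(t_k\):

```python
import numpy as np

# Forward-Euler discretization of the integral equation (I.2.3) for a scalar
# impulsive delay equation
#   x'(t) = a*x(t) + c*x(t - r) + h(t),   t != t_k,
#   Delta x = b*x(t_k^-) + r_k,           t = t_k.
a, c, b = -1.0, 0.5, -0.3
r = 1.0
h = lambda t: np.sin(t)
impulse_times = [1.0, 2.0, 3.0]
impulse_forcing = {1.0: 0.2, 2.0: -0.1, 3.0: 0.05}

dt = 1e-3
s, T = 0.0, 4.0
n_hist = int(round(r / dt))
ts = np.arange(s - r, T + dt / 2, dt)
x = np.zeros_like(ts)
x[:n_hist + 1] = 1.0                    # initial history phi = 1 on [s-r, s]

for i in range(n_hist, len(ts) - 1):
    t = ts[i]
    x[i + 1] = x[i] + dt * (a * x[i] + c * x[i - n_hist] + h(t))
    for tk in impulse_times:            # apply the jump when stepping over t_k
        if t < tk <= ts[i + 1]:
            x[i + 1] += b * x[i + 1] + impulse_forcing[tk]

print(x[-1])                            # a finite terminal value
```

The jump rule implements \(x(t_k) = x(t_k^-) + b\,x(t_k^-) + r_k\), with the pre-step Euler value standing in for \(x(t_k^-)\).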

On the note of “classical” solutions, it will later be important that the impulsive RFDE (I.2.1)–(I.2.2) has a regularizing effect on initial conditions. Precisely, we have the following lemma.

Lemma I.2.1.2

Under the conditions of Lemma I.2.1.1 , the integrated solution \(x:[s-r,\infty )\rightarrow \mathbb {R}^n\) is differentiable from the right on \([s,\infty )\). In particular, if \(x:\mathbb {R}\rightarrow \mathbb {R}^n\) is a solution defined for all time, then \(x\in \mathcal {R}\mathcal {C}\mathcal {R}^1(\mathbb {R},\mathbb {R}^n)\).

Proof

The first conclusion follows by the integral representation of solutions with the remark that \(\mu \mapsto L(\mu )x_\mu \in \mathcal {R}\mathcal {C}\mathcal {R}([s,\infty ),\mathbb {R}^n)\). For the second part, one can show that the restriction of x to any interval of the form \([s,\infty )\) is differentiable from the right by applying the previous result to the restriction on \([s-r,\infty )\). Since s is arbitrary, the result is proven. □

2 Evolution Families

In this section we will specialize to the equation

$$\displaystyle \begin{aligned}\dot x&=L(t)x_t ,&t&\neq t_k \end{aligned} $$
(I.2.4)
$$\displaystyle \begin{aligned}\varDelta x&=B(k) x_{t^-},&t&= t_k. \end{aligned} $$
(I.2.5)

Definition I.2.2.1

Let hypotheses H.1–H.2 hold. For a given \((s,\phi )\in \mathbb {R}\times \mathcal {R}\mathcal {C}\mathcal {R}\), let \(t\mapsto x(t;s,\phi )\) denote the unique solution of (I.2.4)–(I.2.5) satisfying \(x_s(\cdot ;s,\phi )=\phi \). The function \(U(t,s):\mathcal {R}\mathcal {C}\mathcal {R}\rightarrow \mathcal {R}\mathcal {C}\mathcal {R}\) defined by \(U(t,s)\phi =x_t(\cdot ;s,\phi )\) for t ≥ s is the evolution family associated with the homogeneous equation (I.2.4)–(I.2.5).

From here onwards, we will take the convention that if \(L:\mathcal {R}\mathcal {C}\mathcal {R}\rightarrow \mathcal {R}\mathcal {C}\mathcal {R}\) is a linear operator, then \(L\phi (\theta )\) for \(\phi \in \mathcal {R}\mathcal {C}\mathcal {R}\) and θ ∈ [−r, 0] should be understood as [L(ϕ)](θ). Also, the symbol \(I_X\) will refer to the identity operator on X. When the context is clear, we will simply write it as I. Introduce the linear function \(\chi _s:\mathbb {R}^n\rightarrow \mathcal {R}\mathcal {C}\mathcal {R}\) defined by

$$\displaystyle \begin{aligned}{}[\chi_s\xi](\theta)&=\left\{\begin{array}{ll}\xi, & \theta = s \\ 0, & \theta\neq s. \end{array}\right. \end{aligned} $$
(I.2.6)

Lemma I.2.2.1

The evolution family satisfies the following properties:

  1)

    U(t, t) = I for all \(t\in \mathbb {R}\).

  2)

    For \(s\leq t\), \(U(t,s):\mathcal {R}\mathcal {C}\mathcal {R}\rightarrow \mathcal {R}\mathcal {C}\mathcal {R}\) is a bounded linear operator. In particular,

    $$\displaystyle \begin{aligned}||U(t,s)||\leq\exp\left(\int_s^t \ell(\mu)d\mu\right)\prod_{s<t_i\leq t}(1+b(i)). \end{aligned} $$
    (I.2.7)
  3)

    For \(s\leq v\leq t\), U(t, s) = U(t, v)U(v, s).

  4)

    For all θ ∈ [−r, 0], \(s\leq t+\theta \) and \(\phi \in \mathcal {R}\mathcal {C}\mathcal {R}\) , U(t, s)ϕ(θ) = U(t + θ, s)ϕ(0).

  5)

    For all \(t_k>s\), one has \(U(t_k,s)=(I+\chi _0 B(k))U(t_k^-,s)\).

  6)

    Let C(t, s) denote the evolution family on \(\mathcal {R}\mathcal {C}\mathcal {R}\) associated with the “continuous” equation \(\dot x=L(t)x_t\). The following factorization holds:

    $$\displaystyle \begin{aligned} U(t,s)=\left\{\begin{array}{ll} C(t,s),&[s,t]\cap\{t_k\}_{k\in\mathbb{Z}}\in\{\{s\},\emptyset\}\\ C(t,t_k)\circ(I+\chi_0 B(k))\circ U(t_k^-,s),&t\geq t_k>s.\\ \end{array}\right. \end{aligned} $$
    (I.2.8)

Proof

Properties (1), (3) and (4) are straightforward, given the uniqueness assertion of Lemma I.2.1.1 and the definition of the evolution family. Property (6) follows similarly once we can establish (5). To obtain boundedness, we use the integral equation (I.2.3) to get the estimate

$$\displaystyle \begin{aligned} |U(t,s)\phi(\theta)| &\leq ||\phi||+\int_s^{t+\theta}|L(\mu)U(\mu,s)\phi| d\mu + \sum_{s<t_i\leq t+\theta}|B(i)U(t_i^-,s)\phi|\\ &\leq ||\phi|| + \int_s^t \ell(\mu)||U(\mu,s)\phi||d\mu + \sum_{s<t_i\leq t} b(i)||U(t_i^-,s)\phi||. \end{aligned} $$

Since the upper bounds are independent of θ, denoting X(t) = U(t, s)ϕ, we obtain

$$\displaystyle \begin{aligned} ||X(t)||\leq ||\phi|| + \int_s^t \ell(\mu)||X(\mu)||d\mu + \sum_{s<t_i\leq t}b(i)||X(t_i^-)||. \end{aligned}$$

By Lemma I.1.3.6, t↦||X(t)|| is an element of \(\mathcal {R}\mathcal {C}\mathcal {R}([s-r,\infty ),\mathbb {R})\). Using the Gronwall inequality of Lemma I.1.5.1, we obtain the desired boundedness (I.2.7). As for property (5),

$$\displaystyle \begin{aligned} U(t_k,s)\phi(0)&=\phi(0)+\int_s^{t_k}L(\mu)U(\mu,s)\phi d\mu + \sum_{s<t_i\leq t_k}B(i)U(t_i^-,s)\phi\\ &=U(t_k^-,s)\phi(0) + B(k) U(t_k^-,s)\phi \end{aligned} $$

and \(U(t_k^-,s)\phi (\theta )=U(t_k,s)\phi (\theta )\) for θ < 0. □
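The Gronwall-type bound (I.2.7) can be sanity-checked numerically in the simplest delay-free scalar case \(\dot x = a(t)x\), \(\varDelta x = b_k x\), where the evolution family reduces to the scalar \(U(t,s)=\exp (\int _s^t a)\prod _{s<t_k\leq t}(1+b_k)\) and \(\ell =|a|\), \(b(k)=|b_k|\) are admissible choices in H.1–H.2. A sketch, with hypothetical coefficient and impulse data:

```python
import numpy as np

# Check the Gronwall bound (I.2.7) in the delay-free scalar case
#   x' = a(t)*x,  Delta x = b_k*x,
# where U(t,s) = exp(int_s^t a) * prod_{s<t_k<=t}(1+b_k) and the bound is
# exp(int_s^t |a|) * prod_{s<t_k<=t}(1+|b_k|).
a = lambda t: np.cos(t)
impulses = {1.0: -0.5, 2.0: 0.4}           # t_k -> b_k

def integral(f, s, t, n=20000):
    mu = np.linspace(s, t, n + 1)
    mid = 0.5 * (mu[:-1] + mu[1:])
    return float(np.sum(f(mid)) * (t - s) / n)

def U(t, s):
    val = np.exp(integral(a, s, t))
    for tk, bk in impulses.items():
        if s < tk <= t:
            val *= (1.0 + bk)
    return val

def bound(t, s):
    val = np.exp(integral(lambda m: np.abs(a(m)), s, t))
    for tk, bk in impulses.items():
        if s < tk <= t:
            val *= (1.0 + abs(bk))
    return val

checks = [(0.5, 0.0), (1.5, 0.0), (2.5, 0.0), (3.5, 1.2)]
ok = all(abs(U(t, s)) <= bound(t, s) + 1e-9 for t, s in checks)
print(ok)  # True
```

The inequality holds term by term here: \(\int a\leq \int |a|\) and \(|1+b_k|\leq 1+|b_k|\), mirroring the estimates in the proof.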

The connection between the evolution family and processes is given by the following lemma, whose proof now follows directly from Lemma I.2.2.1.

Lemma I.2.2.2

Let \(\mathcal {M}\) be the nonautonomous set over \(\mathbb {R}\times \mathcal {R}\mathcal {C}\mathcal {R}\) with t-fibre

$$\displaystyle \begin{aligned} \mathcal{M}(t)=\bigcup_{s\leq t}\{s\}\times\mathcal{R}\mathcal{C}\mathcal{R}, \end{aligned} $$

and define S(t, (s, ϕ)) = x(t;s, ϕ). \((S,\mathcal {M})\) is a forward, linear process, and \(U(t,s):\mathcal {R}\mathcal {C}\mathcal {R}\rightarrow \mathcal {R}\mathcal {C}\mathcal {R}\) is its two-parameter semigroup.

2.1 Phase Space Decomposition

In the analysis of steady states of linear ordinary differential equations, the stable, centre and unstable subspaces play a key role. The appropriate generalization to impulsive functional differential equations is spectral separation of the evolution family. That is, the evolution family \(U(t,s):\mathcal {R}\mathcal {C}\mathcal {R}\rightarrow \mathcal {R}\mathcal {C}\mathcal {R}\) is spectrally separated if it satisfies the properties of Definition I.1.1.6.

If the evolution family \(U(t,s):\mathcal {R}\mathcal {C}\mathcal {R}\rightarrow \mathcal {R}\mathcal {C}\mathcal {R}\) is spectrally separated, the phase space admits a direct sum decomposition

$$\displaystyle \begin{aligned} \mathcal{R}\mathcal{C}\mathcal{R} = \mathcal{R}\mathcal{C}\mathcal{R}_s(t)\oplus\mathcal{R}\mathcal{C}\mathcal{R}_c(t)\oplus\mathcal{R}\mathcal{C}\mathcal{R}_u(t) \end{aligned} $$
(I.2.9)

for each \(t\in \mathbb {R}\). If \((s,\phi )\in \mathcal {R}\mathcal {C}\mathcal {R}_s\), Eq. (I.1.11) implies that U(t, s)ϕ decays to zero exponentially as \(t\rightarrow \infty \). We say that in the stable fibre bundle, solutions decay exponentially in forward time. Similarly, in the unstable fibre bundle, solutions are defined for all time and decay exponentially in backward time. In the centre fibre bundle, solutions are defined for all time and grow slower than exponentially in forward and backward times. The difference between this decomposition and one more typical of autonomous or ordinary delay differential equations is that the factors of the decomposition are generally time-dependent; that is, they are determined by the t-fibres of the invariant fibre bundles \(\mathcal {R}\mathcal {C}\mathcal {R}_s\), \(\mathcal {R}\mathcal {C}\mathcal {R}_c\) and \(\mathcal {R}\mathcal {C}\mathcal {R}_u\).

2.2 Evolution Families are (Generally) Nowhere Continuous

The use of the phase space \(\mathcal {R}\mathcal {C}\mathcal {R}\) causes the evolution family \(U(t,s):\mathcal {R}\mathcal {C}\mathcal {R}\rightarrow \mathcal {R}\mathcal {C}\mathcal {R}\) to have some undesirable regularity properties. To illustrate this, consider the trivial impulsive functional differential equation

$$\displaystyle \begin{aligned} \dot x&=0,&t&\neq t_k\\ \varDelta x&=0,&t&=t_k. \end{aligned} $$

The evolution family associated with the above homogeneous equation is equivalent to a one-parameter semigroup: U(t, s) = V (t − s), where for ξ ≥ 0,

$$\displaystyle \begin{aligned} V(\xi)\phi(\theta)=\left\{\begin{array}{ll}\phi(\xi+\theta),&\xi+\theta\leq 0 \\ \phi(0),&\xi+\theta>0. \end{array}\right. \end{aligned} $$

Suppose \(\phi \in \mathcal {R}\mathcal {C}\mathcal {R}\) has an internal discontinuity at some d ∈ (−r, 0). Then, for 𝜖 > 0 sufficiently small and any \(0\leq t<\min \{|d|,d+r\}\), one has

$$\displaystyle \begin{aligned} ||V(t-\epsilon)\phi- V(t)\phi||&\geq |V(t-\epsilon)\phi(d-t)-V(t)\phi(d-t)|\\ &=|\phi(d-\epsilon)-\phi(d)|, \end{aligned} $$

which because of the discontinuity is guaranteed to be bounded away from zero for 𝜖 arbitrarily small. As such, \(t\mapsto V(t)\phi \) is nowhere continuous from the left in \([0,\min \{|d|,d+r\})\). On the other hand, we also have

$$\displaystyle \begin{aligned} ||V(t+\epsilon)\phi-V(t)\phi||&\geq|V(t+\epsilon)\phi(d-t-\epsilon)-V(t)\phi(d-t-\epsilon)|\\ &=|\phi(d)-\phi(d-\epsilon)|, \end{aligned} $$

which is again bounded away from zero. We conclude that \(t\mapsto V(t)\phi \) is nowhere continuous from the right on the interval \([0,\min \{|d|,d+r\})\). As a consequence, neither \(t\mapsto U(t,s)\phi \) nor \(s\mapsto U(t,s)\phi \) can generally be relied on to have any points of continuity from either side.

This lack of continuity continues to be a problem for arbitrary evolution families \(U(t,s):\mathcal {R}\mathcal {C}\mathcal {R}\rightarrow \mathcal {R}\mathcal {C}\mathcal {R}\). Indeed, suppose \(U(t,s)\phi :[-r,0]\rightarrow \mathbb {R}^n\) has an internal discontinuity at some d ∈ (−r, 0)—this could result from an impulse effect at some time t k ∈ (t − r, t). Because of the translation property (4) of Lemma I.2.2.1, we have for any t′≥ t such that \(0\leq t'-t <\min \{|d|,d+r\}\)

$$\displaystyle \begin{aligned} ||U(t'+\epsilon,s)\phi-U(t',s)\phi||&\geq|U(t'+\epsilon,s)\phi(d+t-t'-\epsilon)-U(t',s)\phi(d+t-t'-\epsilon)|\\ &=|U(t,s)\phi(d)-U(t,s)\phi(d-\epsilon)|, \end{aligned} $$

which is yet again bounded away from zero for 𝜖 arbitrarily small. In the same way as before, we conclude that \(t'\mapsto U(t',s)\phi \) is nowhere continuous from the left or right, for \(t'\in [t,t+\min \{|d|,d+r\})\).
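The computation above is easy to reproduce numerically. In the following grid-based sketch (the jump location d = −0.5, the time t and the history ϕ are hypothetical choices), the sup-norm distance \(||V(t+\epsilon )\phi -V(t)\phi ||\) stays near the jump height 1 no matter how small ε becomes:

```python
import numpy as np

# The translation semigroup V(t) applied to a history with an internal jump
# at d = -0.5 is nowhere continuous in the sup norm: shifting by any eps > 0
# misaligns the jump, leaving a sup-distance equal to the jump height.
r = 1.0
d = -0.5
phi = lambda th: np.where(th >= d, 1.0, 0.0)   # jump of height 1 at theta = d

def V(t, f):
    # (V(t)f)(theta) = f(t + theta) if t + theta <= 0, else f(0)
    return lambda th: f(np.minimum(t + th, 0.0))

def sup_dist(f, g, n=100001):
    th = np.linspace(-r, 0.0, n)
    return float(np.max(np.abs(f(th) - g(th))))

t = 0.2                                        # 0 <= t < min{|d|, d + r} = 0.5
for eps in [1e-1, 1e-2, 1e-3]:
    print(eps, sup_dist(V(t + eps, phi), V(t, phi)))  # stays near 1
```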

2.3 Continuity under the \(L^2\) Seminorm

While \(t\mapsto U(t,s)\phi \) is generally discontinuous everywhere with respect to the uniform norm \(||g||=\sup _{\theta \in [-r,0]}|g(\theta )|\) that we have been using up until this point, the same is not true if one uses the \(L^2\) norm. Indeed, for \(0<\epsilon <\epsilon _0<r\) and a fixed \(\epsilon _0\), one can make the estimate

$$\displaystyle \begin{aligned} \int_{-r}^0|U(t+\epsilon,s)\phi(\theta){-}U(t,s)\phi(\theta)|{}^2d\theta&\leq\int_{-r}^{-\epsilon} |U(t,s)\phi(\theta{+}\epsilon){-}U(t,s)\phi(\theta)|{}^2 d\theta {+} \epsilon K \end{aligned} $$

where K is some constant such that ||U(t + 𝜖, s)ϕ − U(t, s)ϕ||2 ≤ K for 𝜖 < 𝜖 0. The integrand converges pointwise to zero almost everywhere and is uniformly bounded, so the dominated convergence theorem implies that U(t + 𝜖, s)ϕ → U(t, s)ϕ in the L 2 sense, with respect to the norm

$$\displaystyle \begin{aligned} ||g||{}_2=\left(\int_{-r}^0 |g(\theta)|{}^2 d\theta\right)^{1/2}. \end{aligned}$$

Consequently, \(t\mapsto U(t,s)\phi \) is continuous for each \(\phi \in \mathcal {R}\mathcal {C}\mathcal {R}\) with respect to the ||⋅||2 norm. However,

$$\displaystyle \begin{aligned} U(t,s):(\mathcal{R}\mathcal{C}\mathcal{R},||\cdot||{}_2)\rightarrow (\mathcal{R}\mathcal{C}\mathcal{R},||\cdot||{}_2) \end{aligned}$$

is not bounded. To see this, let us again make use of our translation semigroup \(V(t):\mathcal {R}\mathcal {C}\mathcal {R}\rightarrow \mathcal {R}\mathcal {C}\mathcal {R}\) from Sect. I.2.2.2. Assume there exists K ≥ 0 such that ||U(t, s)ϕ||2 ≤ K||ϕ||2 for all \(\phi \in \mathcal {R}\mathcal {C}\mathcal {R}\). If t ≥ s + r, this implies the inequality

$$\displaystyle \begin{aligned} \sqrt{r}\,|\phi(0)|\leq K\left(\int_{-r}^0|\phi(\theta)|{}^2 d\theta\right)^{\frac{1}{2}}, \end{aligned} $$

which cannot hold for all \(\phi \in \mathcal {R}\mathcal {C}\mathcal {R}\). As such, even though \(\mathcal {R}\mathcal {C}\mathcal {R}\) is dense in \(L^2([-r,0],\mathbb {R}^n)\), we cannot extend U(t, s) to a bounded linear operator on \(L^2\) and take advantage of the continuity of \(t\mapsto U(t,s)\phi \) or the completeness of \(L^2\).
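Both phenomena of this subsection can be observed numerically. The sketch below (a grid discretization of [−r, 0] with r = 1; the jump location and spike widths are hypothetical) checks that \(t\mapsto V(t)\phi \) is \(L^2\)-continuous even for a discontinuous ϕ, while a narrow spike at θ = 0 shows that V(t) is not \(L^2\)-bounded:

```python
import numpy as np

# (i) t -> V(t)phi is continuous in the L^2 norm even for discontinuous phi;
# (ii) V(t) is not L^2-bounded: a narrow spike at theta = 0 has small L^2
# norm, yet V(r) maps it to the constant function phi(0).
r = 1.0
th = np.linspace(-r, 0.0, 200001)
dth = th[1] - th[0]
l2 = lambda v: float(np.sqrt(np.sum(v**2) * dth))

phi = np.where(th >= -0.5, 1.0, 0.0)          # jump at theta = -0.5

def V(t, f):
    return np.interp(np.minimum(t + th, 0.0), th, f)

# (i) the L^2 increment shrinks as eps decreases
d1 = l2(V(0.2 + 1e-1, phi) - V(0.2, phi))
d2 = l2(V(0.2 + 1e-3, phi) - V(0.2, phi))
print(d1, d2)                                  # d2 is much smaller than d1

# (ii) ||V(r)phi_w||_2 / ||phi_w||_2 ~ sqrt(r/w) grows without bound
for w in [1e-1, 1e-2, 1e-3]:
    spike = np.where(th >= -w, 1.0, 0.0)       # phi_w(0) = 1, support width w
    print(w, l2(V(r, spike)) / l2(spike))
```

The translated jump contributes only \(\sqrt{\epsilon }\) to the \(L^2\) increment, which is exactly the dominated convergence argument above; the spike family realizes the failure of the inequality \(\sqrt{r}\,|\phi (0)|\leq K||\phi ||_2\).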

3 Representation of Solutions of the Inhomogeneous Equation

Given the evolution family \(U(t,s):\mathcal {R}\mathcal {C}\mathcal {R}\rightarrow \mathcal {R}\mathcal {C}\mathcal {R}\) associated with the homogeneous equation (I.2.4)–(I.2.5), we now consider to what extent the solutions of the inhomogeneous equation (I.2.1)–(I.2.2) can be represented in the form of a variation-of-constants formula. It is worth revisiting the variation-of-constants formula of Hale [56] for the functional differential equation

$$\displaystyle \begin{aligned} \dot x=L(t)x_t + h(t). \end{aligned}$$

The formula states that the solution \(x:[s-r,\infty )\rightarrow \mathbb {R}^n\) satisfying the initial condition x s = ϕ can be written in the form

$$\displaystyle \begin{aligned} x_t&=X(t,s)\phi + \int_s^t X(t,\mu)\chi_0 h(\mu)d\mu{} \end{aligned} $$
(I.2.10)

for all t ≥ s, where \(X(t,s):C([-r,0],\mathbb {R}^n)\rightarrow C([-r,0],\mathbb {R}^n)\) is the evolution family associated with the homogeneous equation \(\dot x=L(t)x_t\), and the integral is understood as one parameterized by the lag θ ∈ [−r, 0]—that is, for each θ ∈ [−r, 0], one interprets the integral on the right-hand side to be the integral of \(\mu \mapsto X(t,\mu )[\chi _0 h(\mu )](\theta )\in \mathbb {R}^n\) over [s, t]. As stated, the formula is technically incorrect because χ 0 h(μ) is not in the domain \(C([-r,0],\mathbb {R}^n)\) of X(t, μ). In this section we will prove an analogous formula for the inhomogeneous linear system (I.2.1)–(I.2.2), but this technical difficulty will be resolved by working with the phase space \(\mathcal {R}\mathcal {C}\mathcal {R}\) at the outset. We will also interpret the integral in the weak sense. See the comments (Sect. I.2.5) for further discussion. The content of this section follows closely the presentation of Church and Liu [31].

3.1 Pointwise Variation-of-Constants Formula

The first task is to decompose solutions of the inhomogeneous equation by means of superposition. Specifically, we write them as the sum of a homogeneous solution and a pair of inhomogeneous solutions with different inhomogeneities corresponding to the continuous forcing h(t) and the impulsive forcing \(r_k\). The result follows directly from Lemma I.2.1.1.

Lemma I.2.3.1

Let \(h\in \mathcal {R}\mathcal {C}\mathcal {R}(\mathbb {R},\mathbb {R}^n)\), and let hypotheses H.1–H.2 hold. Denote by \(t\mapsto x(t;s,\phi ;h,r)\) the solution of the linear inhomogeneous equation (I.2.1)–(I.2.2) for inhomogeneities h = h(t) and \(r=r_k\), satisfying the initial condition \(x_s(\cdot ;s,\phi ;h,r)=\phi \). The following decomposition is valid:

$$\displaystyle \begin{aligned} x(t;s,\phi;h,r)=x(t;s,\phi;0,0)+x(t;s,0;h,0)+x(t;s,0;0,r) \end{aligned} $$
(I.2.11)

The following lemmas prove representations of the inhomogeneous impulsive and continuous terms \(x_t(\cdot ;s,0;0,r)\) and \(x_t(\cdot ;s,0;h,0)\), respectively.

Lemma I.2.3.2

Under hypotheses H.1–H.2, one has

$$\displaystyle \begin{aligned} x_t(\cdot;s,0;0,r)=\sum_{s< t_i\leq t}U(t, t_i)\chi_0 r_i \end{aligned} $$
(I.2.12)

Proof

Denote x(t) = x(t;s, 0;0, r). Clearly, for \(t\in [s,\min \{ t_i: t_i>s\})\), one has \(x_t=0\). We may assume without loss of generality that \( t_0=\min \{ t_i: t_i>s\}\). Then, \(x_{ t_0}=\chi _0 r_0\) due to (I.2.3). From Lemmas I.2.1.1 and I.2.2.1, we have \(x_t=U(t,t_0)\chi _0 r_0\) for all t ∈ [t 0, t 1), so (I.2.12) holds for all t ∈ [s, t 1). Supposing by induction that \(x_t=\sum _{s< t_i\leq t}U(t, t_i)\chi _0 r_i\) for all t ∈ [s, t k) for some k ≥ 1, we have

$$\displaystyle \begin{aligned} x_{ t_k}&=x_{ t_k^-}+\chi_0 B(k) x_{ t_k^-} + \chi_0 r_k\\ &=U( t_k, t_{k-1})x_{ t_{k-1}} + \chi_0 r_k\\ &=\textstyle U( t_k, t_{k-1})\sum_{s< t_i\leq t_{k-1}}U( t_{k-1}, t_i)\chi_0 r_i + \chi_0 r_k\\ &=\textstyle\sum_{s< t_i\leq t_k}U(t_k, t_i)\chi_0 r_i. \end{aligned} $$

Equality (I.2.12) then holds for t ∈ [t k, t k+1) by applying Lemma I.2.2.1. □

Lemma I.2.3.3

Let \(h\in \mathcal {R}\mathcal {C}\mathcal {R}(\mathbb {R},\mathbb {R}^n)\). Under hypotheses H.1–H.2, one has

$$\displaystyle \begin{aligned} x_t(\theta;s,0;h,0)=\int_s^t U(t,\mu)[\chi_0 h(\mu)](\theta)d\mu, \end{aligned} $$
(I.2.13)

where the integral is defined for each θ as the integral of the function \(\mu \mapsto U(t,\mu )[\chi _0 h(\mu )](\theta )\) in \(\mathbb {R}^n\).

Proof

We provide a brief sketch of the proof. The interested reader can consult Church and Liu [31] for details. Denote x(t;s)h = x(t;s, 0;h, 0). The function \(x(t;s):\mathcal {R}\mathcal {C}\mathcal {R}([s,t],\mathbb {R}^n)\rightarrow \mathbb {R}^n\) is linear for each fixed s ≤ t and extends uniquely to a linear functional \(\tilde x(t,s):\mathcal {L}_1([s,t],\mathbb {R}^n)\rightarrow \mathbb {R}^n\). One can show that it is also bounded, so there exists an integrable, essentially bounded n × n matrix-valued function \(\mu \mapsto V(t,s,\mu )\) such that

$$\displaystyle \begin{aligned} \tilde x(t,s)h=\int_s^t V(t,s,\mu)h(\mu)d\mu. \end{aligned} $$
(I.2.14)

One can then show that V (t, s, μ) is independent of s. Define V (t, s) = V (t, ⋅, s) for any t ≥ s and V (t, s) = 0 for t < s. Let us denote \(\tilde x(t)=\tilde x(t,s)h\) and \(V_{ t_i^-}(\theta ,s)=V( t_i+\theta ,s)\) when θ < 0 and \(V_{ t_i^-}(0,s)=V( t_i^-,s)\). From the integral equation (I.2.3) and the representation (I.2.14), one can carefully show after a series of changes of variables and applications of Fubini’s theorem that

$$\displaystyle \begin{aligned} &\int_s^t V(t,\mu)h(\mu)d\mu =\int_s^t \left[I + \int_\mu^t L(\nu)V_\nu(\cdot,\mu)d\nu + \sum_{s< t_i\leq t}B(i) V_{ t_i^{-}}(\cdot,\mu)\right]h(\mu)d\mu. \end{aligned} $$

Since the above holds for all \(h\in \mathcal {L}_1([s,t],\mathbb {R}^n)\), the fundamental matrix V (t, s) satisfies

$$\displaystyle \begin{aligned} V(t,s)&=\left\{\begin{array}{ll}\displaystyle I+\int_s^t L(\mu)V_{\mu}(\cdot,s)d\mu + \sum_{s< t_i\leq t}B(i) V_{ t_i^-}(\cdot,s),&t\geq s\\ 0 & t<s,\end{array}\right. \end{aligned} $$
(I.2.15)

almost everywhere. By uniqueness of solutions (Lemma I.2.1.1), V (t, s)ξ = U(t, s)[χ 0 ξ](0) for all \(\xi \in \mathbb {R}^n\). Since \(\tilde x\) is an extension of x to \(\mathcal {L}_1([s,t],\mathbb {R}^n)\supset \mathcal {R}\mathcal {C}\mathcal {R}([s,t],\mathbb {R}^n)\), representation (I.2.14) holds for \(h\in \mathcal {R}\mathcal {C}\mathcal {R}([s,t],\mathbb {R}^n)\). Then, from the properties of V , one can verify that for all t ≥ s,

$$\displaystyle \begin{aligned} x_t(\theta;s,0;h,0)&=\tilde x(t+\theta,s)h\\ &=\int_s^t U(t,\mu)[\chi_0 h(\mu)](\theta)d\mu, \end{aligned} $$

which is what was claimed by Eq. (I.2.13). □

With Lemma I.2.3.1 through Lemma I.2.3.3 at hand, we arrive at the variation-of-constants formula.

Lemma I.2.3.4

Let \(h\in \mathcal {R}\mathcal {C}\mathcal {R}(\mathbb {R},\mathbb {R}^n)\) . Under hypotheses H.1–H.2, one has the variation-of-constants formula

$$\displaystyle \begin{aligned} x_t(\theta;s,\phi;h,r){=}U(t,s)\phi(\theta) {+} \int_s^t U(t,\mu)[\chi_0 h(\mu)](\theta)d\mu {+} \sum_{s< t_i\leq t}U(t, t_i)[\chi_0 r_i](\theta). \end{aligned} $$
(I.2.16)

3.2 Variation-of-Constants Formula in the Space \(\mathcal {R}\mathcal {C}\mathcal {R}\)

The main result of the previous section—Lemma I.2.3.4—is a variation-of-constants formula in the Euclidean space. That is, for each θ ∈ [−r, 0], one can compute the right-hand side of (I.2.16), with the integral being that of a vector-valued function with codomain \(\mathbb {R}^n\). The goal of this section will be to reinterpret the variation-of-constants formula in such a way that the integral appearing therein may be thought of as a weak integral in the space \(\mathcal {R}\mathcal {C}\mathcal {R}\).

Lemma I.2.3.5

Let \(h\in \mathcal {R}\mathcal {C}\mathcal {R}(\mathbb {R},\mathbb {R}^n)\), and let hypotheses H.1–H.2 hold. The function \(U(t,\cdot )[\chi _0 h(\cdot )]:[s,t]\rightarrow \mathcal {R}\mathcal {C}\mathcal {R}\) is Pettis integrable for all \(t\geq s\) and

$$\displaystyle \begin{aligned} \left[\int_s^t U(t,\mu)[\chi_0 h(\mu)]d\mu \right](\theta) = \int_s^t U(t,\mu)[\chi_0 h(\mu)](\theta)d\mu. \end{aligned} $$
(I.2.17)

Proof

By Lemma I.1.3.8 and the uniqueness assertion of Proposition I.1.4.1, if we can show for all \(p:[-r,0]\rightarrow \mathbb {R}^n\) of bounded variation that the equality

$$\displaystyle \begin{aligned} \int_{-r}^0 p^\intercal(\theta)d\left[ \int_s^t U(t,\mu)[\chi_0 h(\mu)](\theta)d\mu\right] = \int_s^t\left[\int_{-r}^0 p^\intercal(\theta)d\Big[U(t,\mu)[\chi_0 h(\mu)](\theta)\Big]\right]d\mu \end{aligned} $$

holds, then Lemma I.2.3.5 will be proven. Note that the above is equivalent to

$$\displaystyle \begin{aligned} \int_{-r}^0 p^\intercal(\theta)d\left[ \int_s^t V(t+\theta,\mu)h(\mu)d\mu\right] = \int_s^t\left[\int_{-r}^0 p^\intercal(\theta)d\Big[V(t+\theta,\mu)h(\mu)\Big]\right]d\mu. \end{aligned} $$
(I.2.18)

Suppose first that h is a step function. In this case, a consequence of Eq. (I.2.15) is that \(\theta \mapsto V(t+\theta ,\mu )h(\mu )\) and \(\mu \mapsto V(t+\theta ,\mu )h(\mu )\) are piecewise-continuous, while Lemmas I.2.1.1 and I.2.3.3 imply that \(\theta \mapsto \int _s^t V(t+\theta ,\mu )h(\mu )d\mu \) is also piecewise-continuous, all with at most finitely many discontinuities on any given bounded set. Consequently, both integrals in (I.2.18) can be regarded as Lebesgue–Stieltjes integrals, with Fubini’s theorem granting the desired equality.

Given \(h\in \mathcal {R}\mathcal {C}\mathcal {R}(\mathbb {R},\mathbb {R}^n)\), we approximate its restriction to the interval [s, t] by a convergent sequence of right-continuous step functions \(h_n\) by Lemma I.1.3.1. Equation (I.2.18) is then satisfied with h replaced with \(h_n\). Define the functions

$$\displaystyle \begin{aligned} J_n(\theta)&=\int_s^t V(t+\theta,\mu)h_n(\mu)d\mu,&K_n(\mu)&=\int_{-r}^0 p^\intercal(\theta)d\Big[V(t+\theta,\mu)h_n(\mu)\Big],\\ J(\theta)&=\int_s^t V(t+\theta,\mu)h(\mu)d\mu,&K(\mu)&=\int_{-r}^0 p^\intercal(\theta)d\Big[V(t+\theta,\mu)h(\mu)\Big], \end{aligned} $$

so that \(\int _{-r}^0 p^\intercal (\theta ) dJ_n(\theta ) = \int _s^t K_n(\mu )d\mu \). Using Lemma I.2.2.1, we can get the inequality

$$\displaystyle \begin{aligned} |J_n(\theta)-J(\theta)|\leq||h_n-h||\int_s^t \exp\left(\int_\mu^t \ell(\nu)d\nu\right)d\mu, \end{aligned}$$

so \(J_n\rightarrow J\) uniformly. The conditions of Lemma I.1.3.5 are satisfied, and we have the limit

$$\displaystyle \begin{aligned} \int_{-r}^0 p^\intercal(\theta)dJ_n(\theta)\rightarrow\int_{-r}^0 p^\intercal(\theta)dJ(\theta). \end{aligned}$$

On the other hand, for each μ ∈ [s, t], Lemma I.1.3.4 applied to the difference \(K_n(\mu )-K(\mu )\) yields, together with Lemma I.2.2.1,

$$\displaystyle \begin{aligned}|K_n(\mu)-K(\mu)|&\leq (|p(0)|+|p(-r)|+\mbox{var}_{-r}^0 p)\left(\int_s^t \exp\left(\int_y^t \ell(\nu)d\nu\right)dy\right)||h_n-h||. \end{aligned} $$

Thus, \(K_n\rightarrow K\) uniformly, and the bounded convergence theorem implies \(\int _s^t K_n(\mu )d\mu \rightarrow \int _s^t K(\mu )d\mu \). Combining these results, Eq. (I.2.18) holds and the lemma is proven. □

Lemmas I.2.3.4 and I.2.3.5 together grant the variation-of-constants formula in the Banach space \(\mathcal {R}\mathcal {C}\mathcal {R}\).

Theorem I.2.3.1

Let hypotheses H.1–H.2 hold, and let \(h\in \mathcal {R}\mathcal {C}\mathcal {R}(\mathbb {R},\mathbb {R}^n)\). The unique solution \(t\mapsto x_t(\cdot ;s,\phi ;h,r)\in \mathcal {R}\mathcal {C}\mathcal {R}\) of the linear inhomogeneous impulsive system (I.2.1)–(I.2.2) with initial condition x s(⋅;s, ϕ;h, r) = ϕ satisfies the variation-of-constants formula

$$\displaystyle \begin{aligned} x_t(\cdot;s,\phi;h,r)=U(t,s)\phi + \int_s^t U(t,\mu)[\chi_0 h(\mu)]d\mu + \sum_{s< t_i\leq t}U(t, t_i)[\chi_0 r_i], \end{aligned} $$
(I.2.19)

where the integral is interpreted in the Pettis sense and may be evaluated pointwise using (I.2.17).

As a simple corollary, if x is a solution defined on \([s-r,\infty )\), we can express \(t\mapsto x_t\), defined on \([s,\infty )\), as the solution of an abstract integral equation.

Corollary I.2.3.1

Let hypotheses H.1–H.2 hold, and let \(h\in \mathcal {R}\mathcal {C}\mathcal {R}(\mathbb {R},\mathbb {R}^n)\). Any solution \(x:[s-r,\infty )\rightarrow \mathbb {R}^n\) of the linear inhomogeneous impulsive system (I.2.1)–(I.2.2) satisfies for all \(t\geq s\) the abstract integral equation

$$\displaystyle \begin{aligned} x_t=U(t,s)x_s + \int_s^t U(t,\mu)[\chi_0 h(\mu)]d\mu + \sum_{s< t_i\leq t}U(t, t_i)[\chi_0 r_i]. \end{aligned} $$
(I.2.20)

Equation (I.2.20) will be the key to defining mild solutions in Chap. 4 and, ultimately, will permit us to construct centre manifolds.
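The structure of the variation-of-constants formula can be verified numerically in the delay-free scalar case \(\dot x = ax + h(t)\), \(\varDelta x = b_k x + r_k\), where U(t, s) collapses to the scalar \(\varPhi (t,s)=e^{a(t-s)}\prod _{s<t_k\leq t}(1+b_k)\). In the sketch below, all coefficient and impulse data are hypothetical; a direct Euler simulation is compared against the formula:

```python
import numpy as np

# Variation-of-constants check for the delay-free scalar case
#   x' = a*x + h(t),  Delta x = b_k*x + r_k,
# where Phi(t,s) = e^{a(t-s)} * prod_{s<t_k<=t}(1 + b_k) plays the role of
# U(t,s), and x(T) = Phi(T,s)x0 + int Phi(T,mu)h(mu)dmu + sum Phi(T,t_k)r_k.
a = -0.8
h = lambda t: np.cos(2 * t)
jumps = [(1.0, -0.4, 0.3), (2.0, 0.2, -0.1)]   # (t_k, b_k, r_k)

def Phi(t, s):
    val = np.exp(a * (t - s))
    for tk, bk, _ in jumps:
        if s < tk <= t:
            val *= (1.0 + bk)
    return val

def direct(T, x0, s=0.0, dt=1e-4):
    # step the equation forward with Euler, applying jumps when crossing t_k
    t, x = s, x0
    for _ in range(int(round((T - s) / dt))):
        x += dt * (a * x + h(t))
        for tk, bk, rk in jumps:
            if t < tk <= t + dt:
                x += bk * x + rk
        t += dt
    return x

def voc(T, x0, s=0.0, n=20000):
    # midpoint quadrature of the integral term in the formula
    mu = np.linspace(s, T, n + 1)
    mid = 0.5 * (mu[:-1] + mu[1:])
    total = Phi(T, s) * x0 + sum(Phi(T, m) * h(m) for m in mid) * (T - s) / n
    for tk, _, rk in jumps:
        if s < tk <= T:
            total += Phi(T, tk) * rk
    return total

print(direct(3.0, 1.0), voc(3.0, 1.0))         # the two values agree closely
```

Note how the jump factors of Φ inside the integral account for impulses occurring after μ, exactly as U(t, μ) does in (I.2.19)–(I.2.20).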

4 Stability

Stability (in the sense of Lyapunov) is a fundamental topic in dynamical systems. We remind the reader of its definition, which we will specify to the inhomogeneous linear system (I.2.1)–(I.2.2).

Definition I.2.4.1

We say that the inhomogeneous impulsive RFDE (I.2.1)–(I.2.2) is

  • exponentially stable if there exist K > 0, α > 0 and δ > 0 such that for all \(\phi ,\psi \in \mathcal {R}\mathcal {C}\mathcal {R}\) satisfying ||ϕ − ψ|| < δ, one has \(||x_t(\cdot ;s,\phi )-x_t(\cdot ;s,\psi )||\leq K||\phi -\psi ||e^{-\alpha (t-s)}\) for all t ≥ s;

  • stable if for all 𝜖 > 0 there exists δ > 0 such that for all \(\phi ,\psi \in \mathcal {R}\mathcal {C}\mathcal {R}\) satisfying ||ϕ − ψ|| < δ, one has ||x t(⋅, s, ϕ) − x t(⋅, s, ψ)|| < 𝜖 for all t ≥ s;

  • unstable if it is not stable.

A simple consequence of the superposition principle is that stability of the inhomogeneous equation can be directly inferred from the properties of the evolution family \(U(t,s):\mathcal {R}\mathcal {C}\mathcal {R}\rightarrow \mathcal {R}\mathcal {C}\mathcal {R}\) associated with its homogeneous part.

Proposition I.2.4.1

The inhomogeneous impulsive RFDE (I.2.1)–(I.2.2) is exponentially stable if and only if there exist K > 0 and α > 0 such that the evolution family \(U(t,s):\mathcal {R}\mathcal {C}\mathcal {R}\rightarrow \mathcal {R}\mathcal {C}\mathcal {R}\) associated with the homogeneous part (I.2.4)–(I.2.5) satisfies \(||U(t,s)||\leq Ke^{-\alpha (t-s)}\) for \(t\geq s\). It is stable if and only if the evolution family is bounded: there exists K > 0 such that ||U(t, s)||≤ K for \(t\geq s\).
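Proposition I.2.4.1 can be illustrated in a hypothetical delay-free scalar example: \(\dot x = -x\) with jumps \(\varDelta x = 0.5x\) at \(t_k=k\). Each impulse expands solutions by the factor 1.5, but since \(-1+\ln 1.5<0\), the evolution family still admits an exponential estimate, for instance with K = 1.5 and α = 0.5:

```python
import numpy as np

# Scalar illustration of Proposition I.2.4.1: x' = -x with Delta x = 0.5*x at
# t_k = k gives U(t,0) = e^{-t} * 1.5^{floor(t)}. Despite the expansive
# impulses, |U(t,0)| <= K e^{-alpha t} with K = 1.5, alpha = 0.5, because the
# average growth rate -1 + ln(1.5) is negative.
a, b = -1.0, 0.5

def U(t, s=0.0):
    n_imp = int(np.floor(t)) - int(np.floor(s))   # impulses t_k = k in (s, t]
    return np.exp(a * (t - s)) * (1.0 + b)**n_imp

K, alpha = 1.5, 0.5
worst = max(abs(U(t)) / (K * np.exp(-alpha * t))
            for t in np.linspace(0.0, 20.0, 4001))
print(worst <= 1.0)  # True: the exponential estimate holds on the grid
```

Replacing b = 0.5 with a larger jump (so that \(-1+\ln (1+b)>0\)) destroys the estimate, showing that stability is a joint property of the flow and the impulses.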

There are several analytical criteria in the literature that guarantee exponential stability of linear impulsive RFDE. Some of these are based on analytical estimates and variation-of-constants formulas [4, 18, 98, 156], while others are proven using Lyapunov–Razumikhin methods [149, 166]. Of course, nonlinear stability criteria can be applied to the linear equation (I.2.4)–(I.2.5) as well. We will discuss nonlinear stability further in Chap. 4.

5 Comments

This chapter contains results that appear in the paper Smooth centre manifolds for impulsive delay differential equations [31] by Church and Liu, published in the Journal of Differential Equations in 2018. Most importantly, that publication contains the main results of Sect. I.2.3, as well as Lemmas I.2.1.1 and I.2.2.1.

The variation-of-constants formula (I.2.10) for functional differential equations due to Hale can be made rigorous in several ways, such as through adjoint semigroup theory and integrated semigroup theory. See the reference [62] for an overview of these ideas. In the autonomous setting, the textbook of Diekmann, Verduyn Lunel, van Gils and Walther [41] provides a very readable account based on adjoint semigroups. Here, we have proposed an arguably more elementary approach: use the phase space \(\mathcal {R}\mathcal {C}\mathcal {R}\) and treat the integral in the weak sense. It is not possible (or at least quite nontrivial) to interpret the integral in (I.2.20) as a strong integral in \(\mathcal {R}\mathcal {C}\mathcal {R}\) because \(\mu \mapsto U(t,\mu )\) is generally nowhere continuous from the left or right—see the discussion of Sect. I.2.2.2.

Variation-of-constants formulas for impulsive delay differential equations have appeared in the literature at various earlier points, but only in the context of Euclidean space integrals and only when the impulses did not contain delays. See, for instance, Gopalsamy and Zhang [53], Anokhin, Berezansky and Braverman [4] and Berezansky and Braverman [18] for some early instances.