In Sect. I.2.2.1 we introduced spectral separation for the evolution family \(U(t,s):\mathcal {R}\mathcal {C}\mathcal {R}\rightarrow \mathcal {R}\mathcal {C}\mathcal {R}\) associated to a linear impulsive functional differential equation. This results in a decomposition of the phase space as the internal direct sum \(\mathcal {R}\mathcal {C}\mathcal {R}=\mathcal {R}\mathcal {C}\mathcal {R}_s(t)\oplus \mathcal {R}\mathcal {C}\mathcal {R}_c(t)\oplus \mathcal {R}\mathcal {C}\mathcal {R}_u(t)\) of three fibre bundles: the stable, centre and unstable fibre bundles, respectively. These can be thought of as time-varying vector spaces, and the evolution family exhibits distinct growth characteristics on each of them. In the stable fibre bundle, solutions decay exponentially to zero in forward time. Solutions in the centre fibre bundle are defined for all time and exhibit at most subexponential growth in both forward and backward time, while solutions in the unstable fibre bundle are likewise defined for all time and decay to zero exponentially in reverse time.

For a nonlinear impulsive functional differential equation, if the evolution family of the linearization at some equilibrium point is spectrally separated, the invariant fibre bundles are, in some sense, nonlinearly distorted by the nonlinearities in the vector field and jump map, and the result is a local stable, centre and unstable manifold. The centre manifold in particular contains useful information pertaining to small solutions near the equilibrium and can for this reason be used for the detection of bifurcations. The present chapter is devoted to several aspects of the centre manifold, including its existence, smoothness (both in the phase space and with respect to time), invariance, reduction principle, restricted dynamics and its approximation by Taylor expansion. In Chap. 7 we touch on aspects of the other classical invariant manifolds (although in less detail) and, in general, the dynamics of the nonautonomous process on restriction to them.

1 Preliminaries

The local centre manifold (and, indeed, every invariant manifold we consider in this monograph) is defined through the solution of a particular fixed-point equation. In order to formulate this equation correctly, we need to take into account the expected growth rates of solutions on the centre manifold, which, as we know from linear systems theory, may exhibit subexponential growth in both forward and reverse time. This section is devoted to the introduction of Banach spaces that accommodate these growth rates, as well as some results on linear inhomogeneous equations and substitution operators that we will later need in order to construct the Lyapunov–Perron operator appearing in the fixed-point equation.

1.1 Spaces of Exponentially Weighted Functions

Denote by \(PC(\mathbb {R},\mathbb {R}^n)\) the set of functions \(f:\mathbb {R}\rightarrow \mathbb {R}^n\) that are continuous everywhere except at the times \(t\in \{ t_k:k\in \mathbb {Z}\}\), where they are continuous from the right and have limits on the left. Define a weighted norm \(||f||{ }_\eta = \sup _{t\in \mathbb {R}}e^{-\eta |t|}||f(t)||\) for functions \(f:\mathbb {R}\rightarrow X\) taking values in a Banach space X. We define an analogous norm for sequences indexed by \(\mathbb {Z}\).

$$\displaystyle \begin{aligned} \mathcal{P}\mathcal{C}^{\eta}&=\{\phi:\mathbb{R}\rightarrow\mathcal{R}\mathcal{C}\mathcal{R}:\phi(t) =f_t, f\in PC(\mathbb{R},\mathbb{R}^n),||\phi||{}_{\eta}<\infty\}\\ B^{\eta}(\mathbb{R},\mathcal{R}\mathcal{C}\mathcal{R})&=\{f:\mathbb{R}\rightarrow\mathcal{R}\mathcal{C}\mathcal{R} : ||f||{}_{\eta}<\infty\}\\ PC^\eta(\mathbb{R},\mathbb{R}^n)&=\{f\in PC(\mathbb{R},\mathbb{R}^n) : ||f||{}_\eta<\infty\}\\ B^{\eta}_{ t_k}(\mathbb{Z},\mathbb{R}^n)&=\{f:\mathbb{Z}\rightarrow\mathbb{R}^n : ||f||{}_{\eta}<\infty\}. \end{aligned} $$
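
To fix ideas, here is a simple added illustration of what η-boundedness allows: bounded functions and functions of polynomial growth are η-bounded for every η > 0, while a pure exponential is η-bounded precisely when its rate is at most η. For instance,

$$\displaystyle \begin{aligned}\sup_{t\in\mathbb{R}}e^{-\eta|t|}|t| = \frac{1}{e\eta}<\infty,\qquad \sup_{t\in\mathbb{R}}e^{-\eta|t|}e^{\sigma t}<\infty \iff |\sigma|\leq\eta.\end{aligned}$$

This is why these spaces are suited to solutions on the centre fibre bundle, which may grow subexponentially but not exponentially.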

Also, if \(\mathcal {M}\subset \mathbb {R}\times \mathcal {R}\mathcal {C}\mathcal {R}\) is a nonautonomous set over \(\mathcal {R}\mathcal {C}\mathcal {R}\), we define the space \(\mathcal {P}\mathcal {C}^{\eta }(\mathbb {R},\mathcal {M})\) of piecewise-continuous functions taking values in \(\mathcal {M}\) by

$$\displaystyle \begin{aligned}\mathcal{P}\mathcal{C}^{\eta}(\mathbb{R},\mathcal{M})=\{f\in \mathcal{P}\mathcal{C}^{\eta} : f(t)\in\mathcal{M}(t)\}.\end{aligned}$$

If \(X^{\eta }\) is one of the above spaces, then the normed space \(X^{\eta ,s}=(X^{\eta }, ||\cdot ||{ }_{\eta ,s})\) with norm

$$\displaystyle \begin{aligned}||F||{}_{\eta,s}=\left\{\begin{array}{ll} \sup_{t\in\mathbb{R}}e^{-\eta|t-s|}||F(t)||,&\mbox{dom}(F)=\mathbb{R}\\ \sup_{k\in\mathbb{Z}}e^{-\eta| t_k-s|}||F(k)||,&\mbox{dom}(F)=\mathbb{Z},\end{array}\right.\end{aligned}$$

is complete. Broadly speaking, elements of these spaces will be referred to as η-bounded.
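
Although the shifted norm depends on s, for each fixed s it is equivalent to the unshifted one; indeed, the triangle inequality \(|t|-|s|\leq |t-s|\leq |t|+|s|\) gives

$$\displaystyle \begin{aligned}e^{-\eta|s|}||F||{}_{\eta}\leq||F||{}_{\eta,s}\leq e^{\eta|s|}||F||{}_{\eta}.\end{aligned}$$

The equivalence constants degrade as |s| grows, which is one reason the constructions to follow work with \(||\cdot ||{ }_{\eta ,s}\) directly: the relevant estimates are required to be uniform in s.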

1.2 η-Bounded Solutions from Inhomogeneities

In this section we will characterize the η-bounded solutions of the inhomogeneous linear equation

(I.5.1)

for inhomogeneous terms F and G. We recall from Definition I.1.1.6 that \(\mathcal {R}\mathcal {C}\mathcal {R}_c(t)=\mathcal {R}(P_c(t))\), where \(P_c\) is the projection onto the centre fibre bundle of the linear part of (I.4.1)–(I.4.2).

Lemma I.5.1.1

Let \(\eta \in (0,\min \{-a,b\})\) and let H.1, H.2 and H.5 hold. Then,

(I.5.2)

Proof

If \(\varphi \in \mathcal {R}\mathcal {C}\mathcal {R}_c(\nu )\), then \(P_c(\nu )\varphi = \varphi \) and the function \(x(t) = U(t, \nu ) P_c(\nu )\varphi = U_c(t, \nu )\varphi \) is defined for all \(t\in \mathbb {R}\), satisfies \(x(t) = U(t, s)x(s)\), \(x(\nu ) = \varphi \), \(x(t)(\theta ) = x(t + \theta )(0)\), and by choosing \(\epsilon < \eta \), there exists K > 0 such that

$$\displaystyle \begin{aligned}e^{-\eta|t|}||x(t)||\leq Ke^{\epsilon|\nu|}e^{-(\eta-\epsilon)|t|}||\varphi||\leq Ke^{\epsilon|\nu|}||\varphi||.\end{aligned}$$

Finally, as \(x(t) = [U(t, s)x(s)(0)]_t\) for all \(t\in \mathbb {R}\), we conclude \(x\in \mathcal {P}\mathcal {C}^\eta \).

Conversely, suppose \(\varphi \in \mathcal {R}\mathcal {C}\mathcal {R}\) admits some \(x\in \mathcal {P}\mathcal {C}^\eta \) such that x(t) = U(t, s)x(s) and x(ν) = φ. Let \(||x||{ }_\eta =\overline K\). We will show that P s(ν)φ = P u(ν)φ = 0, from which we will conclude \(\varphi \in \mathcal {R}\mathcal {C}\mathcal {R}_c(\nu )\).

By spectral separation, we have for all ρ < ν,

$$\displaystyle \begin{aligned} e^{-\eta|\rho|}||P_s(\nu)\varphi||&=e^{-\eta|\rho|}||U_s(\nu,\rho)P_s(\rho)x(\rho)||\\ &\leq e^{-\eta|\rho|} Ke^{a(\nu-\rho)}||P_s(\rho)||\cdot||x(\rho)||\\ &\leq K\overline Ke^{a(\nu-\rho)}||P_s(\rho)||, \end{aligned} $$

which implies \(||P_s(\nu )\varphi ||\leq K\overline K e^{a\nu }||P_s(\rho )||\exp (\eta |\rho |-a\rho ).\) Since \(\eta < -a\) and \(\rho \mapsto ||P_s(\rho )||\) is bounded, taking the limit as \(\rho \rightarrow -\infty \) we obtain \(||P_s(\nu )\varphi ||\leq 0\). Similarly, for ρ > ν, we have

$$\displaystyle \begin{aligned} e^{-\eta|\rho|}||P_u(\nu)\varphi||&=e^{-\eta|\rho|}||U_u(\nu,\rho)P_u(\rho)x(\rho)||\\ &\leq e^{-\eta|\rho|}Ke^{b(\nu-\rho)}||P_u(\rho)||\cdot||x(\rho)||\\ &\leq K\overline Ke^{b(\nu-\rho)}||P_u(\rho)||, \end{aligned} $$

which implies \(||P_u(\nu )\varphi ||\leq K\overline Ke^{b\nu }||P_u(\rho )||\exp (\eta |\rho |-b\rho )\). Since \(\eta < b\) and \(\rho \mapsto ||P_u(\rho )||\) is bounded, taking the limit as \(\rho \rightarrow \infty \) we obtain \(||P_u(\nu )\varphi ||\leq 0\). Therefore, \(P_s(\nu )\varphi = P_u(\nu )\varphi = 0\), and we conclude that \(P_c(\nu )\varphi = \varphi \) and \(\varphi \in \mathcal {R}\mathcal {C}\mathcal {R}_c(\nu )\). □

Lemma I.5.1.2

Let conditions H.1, H.2 and H.5 be satisfied. Let \(h\in \mathcal {R}\mathcal {C}\mathcal {R}(\mathbb {R},\mathbb {R}^n)\) . The integrals

are well-defined as Pettis integrals for all \(s,t,v\in \mathbb {R}\) , where we define \(\int _b^a f d\mu = -\int _a^b f d\mu \) when a < b.

Proof

The nontrivial cases are where t ≤ s and t ≤ v. For the former, defining \(H(\mu ) = \chi _0 h(\mu )\) we have the string of equalities

$$\displaystyle \begin{aligned} U_c(t,s)P_c(s)\int_t^s U(s,\mu)H(\mu)d\mu&=U_c(t,s)\int_t^s U_c(s,\mu)P_c(\mu)H(\mu)d\mu\\ &=\int_t^s U_c(t,\mu)P_c(\mu)H(\mu)d\mu\\ &=\int_t^s U(t,\mu)P_c(\mu)H(\mu)d\mu\\ &=-\int_s^t U(t,\mu)P_c(\mu)H(\mu)d\mu. \end{aligned} $$

The first integral on the left exists due to Lemma I.2.3.5 and Proposition I.1.4.1. The subsequent equalities follow by Lemma I.2.3.5 and the definition of spectral separation. The case t ≤ v for the other integral is proven similarly. □

Define the (formal) linear operators \(\mathcal {K}_s^{\eta }:\mathcal {P}\mathcal {C}^{\eta ,s}\oplus B^{\eta ,s}_{ t_k}(\mathbb {Z},\mathbb {R}^n){\rightarrow }B^{\eta ,s}(\mathbb {R},\mathcal {R}\mathcal {C}\mathcal {R})\) by the equation

(I.5.3)

indexed by \(s\in \mathbb {R}\), where the external direct sum \(\mathcal {P}\mathcal {C}^{\eta ,s}\oplus B^{\eta ,s}_{ t_k}(\mathbb {Z},\mathbb {R}^n)\) is identified as a Banach space with norm \(||(f, g)||{ }_{\eta ,s} = ||f||{ }_{\eta ,s} + ||g||{ }_{\eta ,s}\), and the summations are defined as follows:

$$\displaystyle \begin{aligned}\sum_a^b F( t_i)dt_i = \left\{\begin{array}{ll} \displaystyle \sum_{a< t_i\leq b}F( t_i),&a\leq b\\ \displaystyle -\sum_b^a F( t_i)dt_i,&b<a. \end{array}\right.\end{aligned}$$
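
For instance (an added illustration of the orientation convention), if the impulse times are \(t_k=k\xi \) for \(k\in \mathbb {Z}\), then

$$\displaystyle \begin{aligned}\sum_0^{2\xi} F( t_i)dt_i = F(\xi)+F(2\xi),\qquad \sum_{2\xi}^{0}F( t_i)dt_i = -F(\xi)-F(2\xi),\end{aligned}$$

so that, as with the oriented integral convention of Lemma I.5.1.2, interchanging the limits of summation reverses the sign.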

Lemma I.5.1.3

Let H.1, H.2, H.5 and H.7 hold, and let \(\eta \,{\in }\,(0,\min \{-a,b\})\).

  1.

    The function \(\mathcal {K}^\eta _s:\mathcal {P}\mathcal {C}^{\eta ,s}\oplus B^{\eta ,s}_{ t_k}(\mathbb {Z},\mathbb {R}^n)\rightarrow B^{\eta ,s}(\mathbb {R},\mathcal {R}\mathcal {C}\mathcal {R})\) with \(\eta \in (0,\min \{-a,b\})\) and defined by formula (I.5.3) is linear and bounded. In particular, the norm satisfies

    $$\displaystyle \begin{aligned} ||\mathcal{K}^\eta_s|| &\leq C\left[\frac{1}{\eta-\epsilon}\left(1+\frac{e^{(\eta-\epsilon)\xi}}{\xi}\right) + \frac{1}{-a-\eta}\left(1+\frac{2e^{(\eta-a)\xi}}{\xi}\right)+\frac{1}{b-\eta}\left(1+\frac{2e^{(b+\eta)\xi}}{\xi}\right)\right] \end{aligned} $$
    (I.5.4)

    for some constants C and 𝜖 independent of s.

  2.

    \(\mathcal {K}_s^\eta \) has range in \(\mathcal {P}\mathcal {C}^{\eta ,s}\) , and \(v=\mathcal {K}_s^\eta (F,G)\) is the unique solution of (I.5.1) in \(\mathcal {P}\mathcal {C}^{\eta ,s}\) satisfying P c(s)v(s) = 0.

  3.

    The expression \(\mathcal {K}_*(F,G)(t)=(I-P_c(t))\mathcal {K}^0_s(F,G)(t)\) uniquely defines, independent of s, a bounded linear map

    $$\displaystyle \begin{aligned}\mathcal{K}_*:\mathcal{P}\mathcal{C}^0\oplus B^0_{ t_k}(\mathbb{Z},\mathbb{R}^n)\rightarrow \mathcal{P}\mathcal{C}^0.\end{aligned}$$

Proof

Let \(\epsilon <\min \{\min \{-a,b\}-\eta ,\eta \}\). To show that \(\mathcal {K}_s^\eta \) is well-defined, we start by mentioning that all improper integrals and infinite sums appearing on the right-hand side of (I.5.3) can be interpreted as limits of well-defined finite integrals and sums, due to Lemma I.2.3.5, Lemma I.5.1.2 and Proposition I.1.4.1. For brevity, write

$$\displaystyle \begin{aligned}\mathcal{K}_s^\eta(F,G)=\left(K_1^{u,F} - K_1^{c,F} + K_1^{s,F}\right) + \left(K_2^{u,G}-K_2^{c,G}+K_2^{s,G}\right),\end{aligned}$$

where each term corresponds to the one in (I.5.3) in order of appearance.

We start by proving the convergence of the improper integrals. Denote

$$\displaystyle \begin{aligned}I(v)=\int_t^v U(t,\mu)P_u(\mu)[\chi_0 F(\mu)]d\mu,\end{aligned}$$

and let \(v_k\rightarrow \infty \) be an increasing sequence. We have, for m > n and n sufficiently large so that \(v_n > 0\),

$$\displaystyle \begin{aligned} ||I(v_m)-I(v_n)||&\leq\int_{v_n}^{v_m}KNe^{b(t-\mu)}|F(\mu)|d\mu\\ &\leq\int_{v_n}^{v_m}KNe^{b(t-\mu)}e^{\eta\mu}||F||{}_\eta d\mu\\ &=KN||F||{}_\eta e^{bt}\int_{v_n}^{v_m}e^{\mu(\eta-b)}d\mu\\ &=\frac{KN||F||{}_\eta}{b-\eta} e^{bt}\left(e^{-v_n(b-\eta)}-e^{-v_m(b-\eta)}\right)\\ &\leq \frac{KN||F||{}_\eta}{b-\eta}e^{bt}e^{-v_n(b-\eta)}. \end{aligned} $$

Therefore, \(I(v_k)\in \mathcal {R}\mathcal {C}\mathcal {R}\) is Cauchy and thus converges; namely, it converges to the improper integral \(K^{u,F}(t)\). One can similarly prove that \(K^{s,F}(t)\) converges. For the infinite sums, we employ similar estimates; if we denote \(S=\sum _{t< t_i<\infty }||U_u(t, t_i)[\chi _0 G_i]||\) and assume without loss of generality that \(t_0 = 0\), a fairly crude estimate (that we will later improve) yields

$$\displaystyle \begin{aligned} S&\leq\sum_{t< t_i<\infty}KNe^{b(t- t_i)}e^{\eta| t_i|}||G||{}_\eta\\ &=\sum_{-|t|< t_i\leq 0}KN||G||{}_\eta e^{bt}e^{| t_i|(b+\eta)} + \sum_{0< t_i<\infty}KN||G||{}_\eta e^{bt}e^{-(b-\eta) t_i}\\ &\leq KNe^{bt}\left(\frac{|t|}{\xi}e^{|t|(b+\eta)} + \frac{1}{1-e^{-(b-\eta)\xi}}\right)||G||{}_\eta. \end{aligned} $$

Thus, \(K^{u,G}(t)\) converges uniformly. One can show by similar means that \(K^{s,F}(t)\) and \(K^{s,G}(t)\) both converge. Therefore, \(\mathcal {K}_s^\eta (F,G)(t)\in \mathcal {R}\mathcal {C}\mathcal {R}\) exists, and it is now clear that \(\mathcal {K}_s^\eta \) is linear.

Our next task is to prove that \(||\mathcal {K}^\eta _s(F,G)||{ }_{\eta ,s}\leq Q||(F,G)||{ }_{\eta ,s}\) for a constant Q satisfying the estimate of equation (I.5.4). We will prove the bounds only for \(||K^{u,F}||{ }_{\eta ,s}\), \(||K^{u,G}||{ }_{\eta ,s}\), \(||K^{c,F}||{ }_{\eta ,s}\) and \(||K^{c,G}||{ }_{\eta ,s}\); the others follow by similar calculations. For t < s, we have

$$\displaystyle \begin{aligned} & e^{-\eta|t-s|}||K^{u,F}(t)||\\ &\leq e^{-\eta|t-s|}\int_{t}^\infty KNe^{b(t-\mu)}|F(\mu)|d\mu\\ &\quad \leq e^{\eta (t-s)}KN\left[\int_{t}^se^{b(t-\mu)}e^{\eta|\mu-s|}||F||{}_{\eta,s}d\mu + \int_s^\infty e^{b(t-\mu)}e^{\eta|\mu-s|}||F||{}_{\eta,s}d\mu \right]\\ &\quad =e^{\eta(t-s)}KN||F||{}_{\eta,s}\left[\int_t^s e^{b(t-\mu)}e^{\eta(s-\mu)}d\mu + \int_s^\infty e^{b(t-\mu)}e^{\eta(\mu-s)}d\mu \right]\\ &\quad =e^{\eta(t-s)}KN||F||{}_{\eta,s}\left[e^{bt+\eta s}\frac{e^{-(b+\eta)t}-e^{-(b+\eta)s}}{b+\eta} + e^{bt-\eta s}\frac{e^{-(b-\eta)s}}{b-\eta} \right]\\ &\quad \leq KN||F||{}_{\eta,s}\frac{1}{b-\eta}. \end{aligned} $$

The above inequality is also satisfied for t ≥ s, and we conclude \(||K^{u,F}||{ }_{\eta ,s} \leq KN(b-\eta )^{-1}||(F, G)||{ }_{\eta ,s}\). Next, for t < s,

$$\displaystyle \begin{aligned} & e^{-\eta|t-s|}||K^{u,G}(t)||\\ &\quad \leq e^{-\eta|t-s|}\sum_{t< t_i<\infty}KNe^{b(t- t_i)}|G_i|\\ &\quad \leq e^{\eta(t-s)}KN\left[ \sum_{t< t_i<s} e^{b(t- t_i)}e^{\eta| t_i-s|}||G||{}_{\eta,s} {+} \sum_{s\leq t_i<\infty}e^{b(t- t_i)}e^{\eta| t_i-s|}||G||{}_{\eta,s} \right]\\ &\quad \leq e^{\eta(t-s)}KN||G||{}_{\eta,s}\frac{1}{\xi}\left[\int_{t-\xi}^s e^{b(t-\mu)}e^{\eta(s-\mu)}d\mu + \int_{s-\xi}^\infty e^{b(t-\mu)}e^{\eta(\mu-s)}d\mu\right]\\ &\quad \leq e^{\eta(t-s)}\frac{KN||G||{}_{\eta,s}}{\xi}\left[e^{bt+\eta s}\frac{e^{-(b+\eta)(t-\xi)}-e^{-(b+\eta)s}}{b+\eta} + e^{bt-\eta s}\frac{e^{-(b-\eta)(s-\xi)}}{b-\eta}\right]\\ &\quad \leq \frac{2KN||G||{}_{\eta,s}}{\xi(b-\eta)}\cdot e^{(b+\eta)\xi}, \end{aligned} $$

where we have made use of Lemma I.1.5.2 to estimate the sums. The same conclusion is valid for t ≥ s, and it follows that \(||K^{u,G}||{ }_{\eta ,s} \leq 2KNe^{(b+\eta )\xi }(\xi (b-\eta ))^{-1}||(F, G)||{ }_{\eta ,s}\). Next, for t ≤ s,

$$\displaystyle \begin{aligned} e^{-\eta|t-s|}||K^{c,G}(t)||&\leq e^{\eta(t-s)} KN||G||{}_{\eta,s}\sum_{t< t_i\leq s}e^{\epsilon( t_i-t)}e^{\eta(s- t_i)}\\ &\leq e^{\eta(t-s)}\frac{KN||G||{}_{\eta,s}}{\xi}\int_{s-\xi}^t e^{\epsilon(\mu-t)}e^{\eta(s-\mu)}d\mu\\ &=e^{\eta(t-s)}\frac{KN||G||{}_{\eta,s}}{\xi(\eta-\epsilon)}\left(e^{\epsilon(s-\xi-t)}e^{\eta\xi} - e^{-\eta(t-s)} \right)\\ &\leq \frac{KN||G||{}_{\eta,s}}{\xi(\eta-\epsilon)}e^{(\eta-\epsilon)\xi}. \end{aligned} $$

This estimate continues to hold for all \(t,s\in \mathbb {R}\). To compare to the integral term, for s ≤ t, we have

$$\displaystyle \begin{aligned} e^{-\eta|t-s|}||K^{c,F}(t)||&\leq e^{-\eta(t-s)}KN||F||{}_{\eta,s}\int_s^te^{\epsilon(t-\mu)}e^{\eta(\mu-s)}d\mu\\ &=e^{-\eta(t-s)}KN||F||{}_{\eta,s}\frac{1}{\eta-\epsilon}\left(e^{\eta(t-s)}-e^{\epsilon(t-s)}\right)\\ &\leq\frac{KN||F||{}_{\eta,s}}{\eta-\epsilon}, \end{aligned} $$

and this estimate persists for all \(t,s\in \mathbb {R}\). Similar estimates for the other integrals and sums appearing in (I.5.3) ultimately result in the bound appearing in (I.5.4). This proves part 1.

To prove part 2, denote \(v=\mathcal {K}_s^\eta (F,G)\). It is clear from the definition of v, the orthogonality of the projection operators and Proposition I.1.4.1 that \(P_c(s)v(s) = 0\). Also, for all \(-\infty < z \leq t < \infty \), denoting \(\overline F=\chi _0 F\) and \(\overline G_i=\chi _0 G_i\), we have

so that \(t\mapsto v(t)\) solves the integral equation (I.5.1). This also demonstrates that \(v\in \mathcal {P}\mathcal {C}^\eta \). To show that it is the only solution in \(\mathcal {P}\mathcal {C}^\eta \) satisfying \(P_c(s)v(s) = 0\), suppose there is another solution \(r \in \mathcal {P}\mathcal {C}^\eta \) that satisfies \(P_c(s)r(s) = 0\). Then the function w := v − r is an element of \(\mathcal {P}\mathcal {C}^\eta \) that satisfies \(w(t) = U(t, z)w(z)\) for \(-\infty < z \leq t < \infty \). By Lemma I.5.1.1, we have \(w(s)\in \mathcal {R}\mathcal {C}\mathcal {R}_c(s)\). But since \(P_c(s)w(s) = 0\) and \(P_c(s)\) is the identity on \(\mathcal {R}\mathcal {C}\mathcal {R}_c(s)\), we obtain w(s) = 0. Therefore, \(w(t) = U(t, s)0 = U_c(t, s)0 = 0\) for all \(t\in \mathbb {R}\), and we conclude v = r, proving the uniqueness assertion.

For assertion 3, we compute first

Routine estimation using inequalities (I.1.11)–(I.1.13) together with Lemma I.1.5.2 produces the bound

$$\displaystyle \begin{aligned}||\mathcal{K}_*(F,G)(t)||\leq KN\left(\frac{-1}{a}+\frac{1}{b}-\frac{e^{-a\xi}}{a\xi}+\frac{e^{b\xi}}{b\xi}\right)||(F,G)||,\end{aligned}$$

and as the bound is independent of t, s, the result is proven. □

1.3 Substitution Operator and Modification of Nonlinearities

Let \(\xi :\mathbb {R}_+\rightarrow \mathbb {R}\) be a \(C^\infty \) bump function satisfying

  i) ξ(y) = 1 for 0 ≤ y ≤ 1,

  ii) 0 ≤ ξ(y) ≤ 1 for 1 ≤ y ≤ 2,

  iii) ξ(y) = 0 for y ≥ 2.
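
For concreteness, one admissible choice of ξ (included here only as an added example; any function satisfying i)–iii) will do) is built from the standard smooth transition function \(h(y)=e^{-1/y}\) for y > 0 and h(y) = 0 for y ≤ 0:

$$\displaystyle \begin{aligned}\xi(y)=\frac{h(2-y)}{h(2-y)+h(y-1)}.\end{aligned}$$

This function is \(C^\infty \) because h is \(C^\infty \) and the denominator never vanishes, and it satisfies i)–iii) by inspection.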

We modify the nonlinearities of (I.4.1)–(I.4.2) in the centre and hyperbolic directions separately. For δ > 0 and \(s\in \mathbb {R}\), we let

$$\displaystyle \begin{aligned}F_{\delta,s}(t,x)&=f(t,x)\xi\left(\frac{||P_c(s)x||}{N\delta}\right)\xi\left(\frac{||(P_s(s)+P_u(s))x||}{N\delta}\right) \end{aligned} $$
(I.5.5)
$$\displaystyle \begin{aligned}G_{\delta,s}(k,x)&=g(k,x_{0^-})\xi\left(\frac{||P_c(s)x_{0^-}||}{N\delta}\right)\xi\left(\frac{||(P_s(s)+P_u(s))x_{0^-}||}{N\delta}\right). \end{aligned} $$
(I.5.6)

Notice that \(G_{\delta ,s}(k, x)\) takes the pointwise left-limit in the evaluation (I.5.6). The proof of the following lemma and corollary will be omitted. They can be proven by emulating the proof of Lemma 6.1 from [70] and taking into account the uniform boundedness of the projectors \(P_i\); see property 1 of Definition I.1.1.6.

Lemma I.5.1.4

Let f(t, ⋅) and g(k, ⋅) be uniformly (in \(t\in \mathbb {R}\) and \(k\in \mathbb {Z}\) ) Lipschitz continuous on the ball \(B_{\mathcal {R}\mathcal {C}\mathcal {R}}(\delta ,0)\) in \(\mathcal {R}\mathcal {C}\mathcal {R}\) with mutual Lipschitz constant L(δ), and let \(f(t, 0) = g(k, 0) = 0\). The functions

are globally, uniformly (in \(t\in \mathbb {R}\) and \(k\in \mathbb {Z}\) ) Lipschitz continuous with mutual Lipschitz constant L δ that satisfies L δ → 0 as δ → 0, independent of s.

Corollary I.5.1.1

The substitution operator

$$\displaystyle \begin{aligned} R_{\delta,s}:\mathcal{P}\mathcal{C}^{\eta,s}\rightarrow PC^{\eta,s}(\mathbb{R},\mathbb{R}^n)\oplus B^{\eta,s}_{ t_k}(\mathbb{Z},\mathbb{R}^n)\end{aligned}$$

defined by \(R_{\delta ,s}(x)(t, k) = (F_{\delta ,s}(t, x(t)), G_{\delta ,s}(k, x(t_k)))\) is globally Lipschitz continuous with Lipschitz constant \(\tilde L_\delta \) that satisfies \(\tilde L_\delta \rightarrow 0\) as δ → 0. Moreover, the Lipschitz constant is independent of η, s.

Corollary I.5.1.2

\(||(F_{\delta ,s}(t, x), G_{\delta ,s}(k, x))||\leq 4\delta L_\delta \) for all \(x\in \mathcal {R}\mathcal {C}\mathcal {R}\) and \((t,k)\in \mathbb {R}\times \mathbb {Z}\).

Remark I.5.1.1

The explicit connection between L(δ) (the Lipschitz constant for f and g) and L δ and \(\tilde L_\delta \) is complicated and depends in part on the choice of cutoff function ξ and the constant N.

2 Fixed-Point Equation and Existence of a Lipschitz Centre Manifold

Let \(\eta \in (\epsilon ,\min \{-a,b\})\) and define a mapping \(\mathcal {F}_s:\mathcal {P}\mathcal {C}^{\eta ,s}\times \mathcal {R}\mathcal {C}\mathcal {R}_c(s)\rightarrow \mathcal {P}\mathcal {C}^{\eta ,s}\) by

$$\displaystyle \begin{aligned} \mathcal{F}_s(u,\varphi)=U(\cdot,s)\varphi + \mathcal{K}^\eta_s( R_{\delta,s}(u)). \end{aligned} $$
(I.5.7)

Note that by Lemma I.5.1.3 and Corollary I.5.1.1, the operator is well-defined, \(\mathcal {K}_s^\eta \) is bounded and R δ is globally Lipschitz continuous for each δ > 0, provided H.1–H.7 hold. Choose δ small enough so that

$$\displaystyle \begin{aligned} \tilde L_\delta||K^\eta_s||{}_\eta<\frac{1}{2}.\end{aligned} $$
(I.5.8)

Notice that, due to Lemma I.5.1.3, δ can be chosen so that (I.5.8) is satisfied independently of s. If \(||\varphi || < r/(2K)\), then \(\mathcal {F}_s(\cdot ,\varphi )\) leaves \(\overline {B(r,0)}\subset \mathcal {P}\mathcal {C}^{\eta ,s}\) invariant. Moreover, \(\mathcal {F}_s(\cdot ,\varphi )\) is Lipschitz continuous with Lipschitz constant \(\frac {1}{2}\), and r here is arbitrary.
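
A quick sketch of why these claims hold (added for convenience; it uses only the centre-bundle estimate \(||U_c(t,s)\varphi ||\leq Ke^{\epsilon |t-s|}||\varphi ||\) furnished by spectral separation, the bound (I.5.8), Corollary I.5.1.1 and the fact that \(R_{\delta ,s}(0)=0\), which follows from f(t, 0) = 0 and g(k, 0) = 0): for \(\varphi \in \mathcal {R}\mathcal {C}\mathcal {R}_c(s)\) with \(||\varphi ||<r/(2K)\) and \(u,v\in \overline {B(r,0)}\),

$$\displaystyle \begin{aligned} ||\mathcal{F}_s(u,\varphi)||{}_{\eta,s}&\leq\sup_{t\in\mathbb{R}}Ke^{(\epsilon-\eta)|t-s|}||\varphi|| + ||\mathcal{K}^\eta_s||\,\tilde L_\delta||u||{}_{\eta,s}\leq K||\varphi||+\frac{1}{2}||u||{}_{\eta,s}<r,\\ ||\mathcal{F}_s(u,\varphi)-\mathcal{F}_s(v,\varphi)||{}_{\eta,s}&=||\mathcal{K}^\eta_s(R_{\delta,s}(u)-R_{\delta,s}(v))||{}_{\eta,s}\leq||\mathcal{K}^\eta_s||\,\tilde L_\delta||u-v||{}_{\eta,s}<\frac{1}{2}||u-v||{}_{\eta,s}. \end{aligned} $$

We can now prove the following: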

Theorem I.5.2.1

Let conditions H.1–H.7 hold. If δ is chosen as in (I.5.8), then there exists a globally Lipschitz continuous mapping \(u^*_s:\mathcal {R}\mathcal {C}\mathcal {R}_c(s)\rightarrow \mathcal {P}\mathcal {C}^{\eta ,s}\) such that \(u_s=u^*_s(\varphi )\) is the unique solution in \(\mathcal {P}\mathcal {C}^{\eta ,s}\) of the equation \(u_s=\mathcal {F}_s(u_s,\varphi ).\)

Proof

The discussion preceding the statement of Theorem I.5.2.1 indicates that \(\mathcal {F}_s(\cdot ,\varphi )\) is a contraction mapping on \(\overline {B(r,0)}\subset \mathcal {P}\mathcal {C}^{\eta ,s}\) for every \(r > 2K||\varphi ||\). Since the latter is a closed subset of the Banach space \(\mathcal {P}\mathcal {C}^{\eta ,s}\), the contraction mapping principle implies the existence of the function \(u^*_s\). To show that it is Lipschitz continuous, we note

$$\displaystyle \begin{aligned} ||u^*_s(\varphi)-u^*_s(\psi)||{}_{\eta,s}&=||\mathcal{F}_s(u^*_s(\varphi),\varphi)-\mathcal{F}_s(u^*_s(\psi),\psi))||{}_{\eta,s}\\ &\leq K||\varphi-\psi||+\frac{1}{2}||u^*_s(\varphi)-u^*_s(\psi)||{}_{\eta,s}. \end{aligned} $$

Therefore, \(u^*_s\) is Lipschitz continuous with Lipschitz constant 2K. □

Definition I.5.2.1 (Lipschitz Centre Manifold)

The centre manifold, \(\mathcal {W}_c\subset \mathbb {R}\times \mathcal {R}\mathcal {C}\mathcal {R}\), is the nonautonomous set whose t-fibres for \(t\in \mathbb {R}\) are given by

$$\displaystyle \begin{aligned}{\mathcal{W}}_c(t)=\mbox{Im}\{\mathcal{C}(t,\cdot)\},\end{aligned} $$
(I.5.9)

where \(\mathcal {C}:\mathcal {R}\mathcal {C}\mathcal {R}_c\rightarrow \mathcal {R}\mathcal {C}\mathcal {R}\) is the (fibrewise) Lipschitz map defined by \(\mathcal {C}(t,\phi )=u^*_t(\phi )(t)\). Its dimension is equal to \(\dim (\mathcal {R}\mathcal {C}\mathcal {R}_c)\).

Remark I.5.2.1

The centre manifold depends non-canonically on the choice of cutoff function from Sect. I.5.1.3. That is, the centre manifold is not unique, so we are committing an abuse of terminology by referring to such a construct generally as “the” centre manifold. One must always understand that the definition of the centre manifold is with respect to a particular cutoff function. Also, since \(\mathcal {C}(t,\cdot ):\mathcal {R}\mathcal {C}\mathcal {R}_c(t)\rightarrow \mathcal {R}\mathcal {C}\mathcal {R}\) has a \(\dim (\mathcal {R}\mathcal {C}\mathcal {R}_c(t))\)-dimensional domain, it is appropriate to say that the centre manifold also has this dimension.

The construction above implies the centre manifold is fibrewise Lipschitz. We can prove a stronger result, namely that the Lipschitz constant can be chosen independent of the given fibre.

Corollary I.5.2.1

There exists a constant L > 0 such that \(||\mathcal {C}(t,\phi )-\mathcal {C}(t,\psi )||\leq L||\phi -\psi ||\) for all \(t\in \mathbb {R}\) and \(\phi ,\psi \in \mathcal {R}\mathcal {C}\mathcal {R}_c(t)\).

Proof

Denote \(u^\phi = u^*_t(\phi )\) and \(u^\psi = u^*_t(\psi )\). A preliminary estimation appealing to the fixed-point equation (I.5.7) yields

$$\displaystyle \begin{aligned}||\mathcal{C}(t,\phi)-\mathcal{C}(t,\psi)||\leq ||\phi-\psi|| + ||(\mathcal{K}^\eta_t(R_\delta u^\phi)-\mathcal{K}^\eta_t(R_\delta u^\psi))(t)||.\end{aligned}$$

By Corollary I.5.1.2, each of \(R_{\delta ,t} u^\phi \) and \(R_{\delta ,t} u^\psi \) is uniformly bounded, so Lemma I.5.1.3 implies the existence of a constant c > 0 such that

$$\displaystyle \begin{aligned} ||(\mathcal{K}^\eta_t(R_{\delta,t} u^\phi)-\mathcal{K}^\eta_t(R_{\delta,t} u^\psi))(t)||&\leq c ||R_{\delta,t} u^\phi-R_{\delta,t} u^\psi||{}_{\eta,t}\\ &= c\sup_{s\in\mathbb{R}}||(R_{\delta,t} u^\phi - R_{\delta,t} u^\psi)(s)||e^{-\eta|t-s|}\\ &\leq c \tilde L_\delta ||u^\phi-u^\psi||{}_{\eta,t}\\ &\leq c\tilde L_\delta 2K||\phi-\psi||, \end{aligned} $$

and in the last line, we used the Lipschitz constant from Theorem I.5.2.1. Combining this result with the previous estimate for \(||\mathcal {C}(t,\phi )-\mathcal {C}(t,\psi )||\) yields the uniform Lipschitz constant. By Corollary I.5.1.1, the Lipschitz constant has the claimed property. □

2.1 A Remark on Centre Manifold Representations: Graphs and Images

Our initial definition of the centre manifold was as the fibre bundle whose t-fibres are the images of \(\mathcal {C}(t,\cdot )\). However, sometimes one likes to think of the centre manifold as being the graph of a function. To accomplish this, one can use the hyperbolic part. Let us define the function \(\mathcal {H}:\mathcal {R}\mathcal {C}\mathcal {R}_c\rightarrow \mathcal {R}\mathcal {C}\mathcal {R}\) by \(\mathcal {H}(t,\phi )=(I-P_c(t))\mathcal {C}(t,\phi )\). In this way, the centre manifold can be identified with the graph of the hyperbolic part of the centre manifold. Indeed, by part 2 of Lemma I.5.1.3, we have the decomposition \(\mathcal {C}(t,\phi )=\phi + (I-P_c(t))\mathcal {C}(t,\phi )\), so that

$$\displaystyle \begin{aligned} \mathcal{W}_c(t)&=\{\phi + \mathcal{H}(t,\phi):\phi\in\mathcal{R}\mathcal{C}\mathcal{R}_c(t)\}\\ &\sim \{(\phi,\mathcal{H}(t,\phi)):\phi\in\mathcal{R}\mathcal{C}\mathcal{R}_c(t)\}=\mbox{Graph}(\mathcal{H}(t,\cdot)). \end{aligned} $$

Since \(\mathcal {R}\mathcal {C}\mathcal {R}_c(t)\) and its complement \(\mathcal {R}(I-P_c(t))=\mathcal {R}\mathcal {C}\mathcal {R}_s(t)\oplus \mathcal {R}\mathcal {C}\mathcal {R}_u(t)\) have only 0 in their intersection, this identification makes sense. When one reduces down to ordinary differential equations, one usually thinks of precisely the function \(\mathcal {H}\) as being the centre manifold. This ambiguity between the function \(\mathcal {C}:\mathcal {R}\mathcal {C}\mathcal {R}_c\rightarrow \mathcal {R}\mathcal {C}\mathcal {R}\), the fibre bundle \(\mathcal {W}_c\), the hyperbolic part \(\mathcal {H}:\mathcal {R}\mathcal {C}\mathcal {R}_c\rightarrow \mathcal {R}\mathcal {C}\mathcal {R}\) and its graph can sometimes make statements about centre manifolds imprecise. In this monograph, the term centre manifold without any additional qualifiers will always mean the fibre bundle \(\mathcal {W}_c\).

3 Invariance and Smallness Properties

Recall that by Lemma I.4.1.1, there is a process \((S,\mathcal {M})\) on \(\mathcal {R}\mathcal {C}\mathcal {R}\) such that \(t\mapsto S(t, s)\phi \) is the unique mild solution of (I.4.3) through the initial condition (s, ϕ) defined on an interval [s, s + α). With this in mind, the centre manifold is locally positively invariant with respect to \((S,\mathcal {M})\).

Theorem I.5.3.1 (Centre Manifold: Invariance and Inclusion of Bounded Orbits)

Let conditions H.1–H.7 hold. The centre manifold \(\mathcal {W}_c\) enjoys the following properties.

  1.

    \(\mathcal {W}_c\) is locally positively invariant: if \((s,\phi )\in \mathcal {W}_c\) and ||S(t, s)ϕ|| < δ for t ∈ [s, T], then \((t,S(t,s)\phi )\in \mathcal {W}_c\) for t ∈ [s, T].

  2.

    If \((s,\phi )\in \mathcal {W}_c\) , then \(S(t,s)\phi \,{=}\, u^*_t(P_c(t)S(t,s)\phi )(t)\,{=}\,\mathcal {C}(t,P_c(t)S(t,s)\phi )\).

  3.

    If \((s,\phi )\in \mathcal {W}_c\) , there exists a unique mild solution \(u\in \mathcal {P}\mathcal {C}^{\eta ,s}\) of the semilinear system

    $$\displaystyle \begin{aligned} \dot x&=L(t)x_t + F_{\delta,s}(t,x_t),&t&\neq t_k\\ \varDelta x&=B(k)x_{t^-} + G_{\delta,s}(k,x_{t^-}),&t&=t_k \end{aligned} $$

    with the property that \(u(t)\in \mathcal {W}_c(t)\) for \(t\in \mathbb {R}\), \(||u||{ }_{\eta ,s} \leq \delta \) and u(s) = ϕ.

  4.

    If \(x:\mathbb {R}\rightarrow \mathcal {R}\mathcal {C}\mathcal {R}\) is a mild solution of (I.4.3) satisfying ||x|| < δ, then \((t,x(t))\in \mathcal {W}_c\) for all \(t\in \mathbb {R}\).

  5.

    \(\mathbb {R}\times \{0\}\subset \mathcal {W}_c\) and \(\mathcal {C}(t,0)=0\) for all \(t\in \mathbb {R}\).

Proof

Let \((s,\phi )\in \mathcal {W}_c\) and denote \(x(t) = S(t, s)\phi \), with ||x|| < δ. Since \((s,\phi )\in \mathcal {W}_c\), there exists \(\varphi \in \mathcal {R}\mathcal {C}\mathcal {R}_c(s)\) such that \(\phi = u_s^*(\varphi )(s)\). Define \(\hat x=u_s^*(\varphi )\). Then, it follows that \(\varphi = P_c(s)\phi \), \(\hat x(s)=\phi =P_c(s)\phi + \mathcal {K}^\eta _s( R_{\delta ,s}(\hat x))(s)\), and

$$\displaystyle \begin{aligned} \hat x(t)&=U(t,s)\varphi + \mathcal{K}^\eta_s( R_\delta(\hat x))(t)\\ &=U(t,s)\varphi + \left[\displaystyle U(t,s)K^\eta_s( R_{\delta,s}(\hat x))(s) + \int_s^t U(t,\mu)\chi_0 F_{\delta,s}(\mu,\hat x(\mu))d\mu\right.\\ &\left. \quad + \sum_{s< t_i\leq t}U(t, t_i)\chi_0 G_{\delta,s}(i,\hat x( t_i))\right]\\ &=U(t,s)\hat x(s) + \int_s^t U(t,\mu)\chi_0 F_{\delta,s}(\mu,\hat x(\mu))d\mu + \sum_{s< t_i\leq t}U(t, t_i)\chi_0 G_{\delta,s}(i,\hat x( t_i)) \end{aligned} $$

for all t ∈ [s, T]. But since ||x(t)|| < δ on [s, T], uniqueness of mild solutions (Lemma I.2.1.1 with Theorem I.2.3.1) implies that \(x=\hat x|{ }_{[s,T]}\).

Let v ∈ [s, T] and define \(z:\mathbb {R}\rightarrow \mathcal {R}\mathcal {C}\mathcal {R}\) by \(z=\hat x - U(\cdot ,v)P_c(v)\hat x(v)\). One can easily verify that

$$\displaystyle \begin{aligned} z(t)&=U(t,v)z(v)+\int_v^t U(t,\mu)\chi_0 F_{\delta,s}(\mu,\hat x(\mu))d\mu\\ &+ \sum_{v< t_i\leq t}U(t, t_i)\chi_0 G_{\delta,s}(i,\hat x( t_i)) \end{aligned} $$

for all \(t \in [v, \infty )\) and that \(P_c(v)z(v) = 0\). On the other hand, since \(||\hat x||<\delta \), we have \(R_{\delta ,s}(\hat x)=R_{\delta ,v}(\hat x)\). From these two observations and Lemma I.5.1.3, \(z=\mathcal {K}^\eta _v( R_{\delta ,v}(\hat x))|{ }_{[v,\infty )}\), so that we may write

$$\displaystyle \begin{aligned}\hat x=U(\cdot,v)P_c(v)\hat x(v) + \mathcal{K}^\eta_v( R_{\delta,v}(\hat x)) = u^*_v(P_c(v)\hat x(v)).\end{aligned}$$

Therefore, \(\hat x(v)=u^*_v(P_c(v)\hat x(v))(v)\), and since \(x(v)=\hat x(v)\), this proves that \((v,x(v))\in \mathcal {W}_c\) and, through essentially the same proof, that

$$\displaystyle \begin{aligned}x(v)=u^*_v(P_c(v)x(v))(v)=\mathcal{C}(v,P_c(v)x(v)).\end{aligned}$$

The proofs of the other three assertions of the theorem follow by similar arguments and are omitted. □

The modification of the nonlinearity R δ results in the function \(u^*_s\) that defines the centre manifold having a uniformly small hyperbolic part. Namely, we have the following lemma.

Lemma I.5.3.1

Define \(\widehat P_c:\mathcal {P}\mathcal {C}^\eta \rightarrow \mathcal {P}\mathcal {C}^\eta (\mathbb {R},\mathcal {R}\mathcal {C}\mathcal {R}_c)\) by \(\widehat {P}_c\phi (t)=P_c(t)\phi (t)\) . If δ > 0 is sufficiently small, then \(||(I-\widehat {P}_c) u^*_s||{ }_0<\delta \) . More precisely, it is sufficient to choose δ > 0 small enough so that \(NL_\delta ||\mathcal {K}^\eta _s||<\frac {1}{4}\).

Proof

Recall that \(u^*_s\) satisfies the fixed-point equation \(u^*_s=U(\cdot ,s)\varphi +\mathcal {K}^\eta _s( R_{\delta ,s}(u^*_s))\). Thus, with \(\widehat {P}_h=I-\widehat {P}_c\),

$$\displaystyle \begin{aligned}\widehat{P}_h u^*_s = \widehat{P}_h\circ\mathcal{K}^\eta_s( R_{\delta,s}(u^*_s))\end{aligned}$$

because U(t, s) is an isomorphism of \(\mathcal {R}\mathcal {C}\mathcal {R}_c(s)\) onto \(\mathcal {R}\mathcal {C}\mathcal {R}_c(t)\) and \(\varphi {\in }\mathcal {R}\mathcal {C}\mathcal {R}_c(s)\). By Corollary I.5.1.2, we have for all \(t\in \mathbb {R}\) that \(|| R_{\delta ,s}(u^*_s(t))||\leq 4\delta L_\delta \), which implies \( R_{\delta ,s}(u^*_s)\in B^0(\mathbb {R},\mathbb {R}^n)\oplus B^0_{ t_k}(\mathbb {Z},\mathbb {R}^n)\). We obtain the claimed result by applying the second conclusion of Lemma I.5.1.3 and taking δ sufficiently small, recalling from Corollary I.5.1.1 that \(L_\delta \rightarrow 0\) as δ → 0. The explicit estimate for δ comes from the bound \(||\widehat P_h\mathcal {K}^\eta _s(R_{\delta ,s}(u_s^*))||{ }_0\leq N ||\mathcal {K}^\eta _s||4\delta L_\delta .\) □

4 Dynamics on the Centre Manifold

The centre manifold can be identified with a \(\dim (\mathcal {R}\mathcal {C}\mathcal {R}_c(t))\)-dimensional invariant fibre bundle contained in \(\mathbb {R}\times \mathcal {R}\mathcal {C}\mathcal {R}\). A natural question to ask is how the process \((S,\mathcal {M})\) behaves when restricted to the centre manifold. We address this now.

4.1 Integral Equation

On the centre manifold, the components of mild solutions in the centre fibre bundle decouple from the hyperbolic components. The following lemma states how the components in the centre fibre bundle evolve. The proof follows from Theorem I.5.3.1.

Lemma I.5.4.1 (Dynamics on the Centre Manifold: Integral Equation)

Let \(y:\mathbb {R}\rightarrow \mathcal {R}\mathcal {C}\mathcal {R}\) satisfy \(y(t)\in \mathcal {W}_c(t)\) with ||y|| < δ. Consider the projection of y onto the centre fibre bundle: \(w(t) = P_c(t)y(t)\). The projection satisfies the integral equation

$$\displaystyle \begin{aligned} w(t)&=U(t,s)w(s)+\int_s^t U(t,\mu)P_c(\mu)\chi_0 F_{\delta,\mu}(\mu,\mathcal{C}(\mu,w(\mu)))d\mu\\ &\quad + \sum_{s< t_i\leq t}U(t, t_i)P_c( t_i)\chi_0 G_{\delta,t_i}(i,\mathcal{C}( t_i,w( t_i))). \end{aligned} $$
(I.5.10)

4.2 Abstract Ordinary Impulsive Differential Equation

Lemma I.5.4.1 describes the dynamics of the centre fibre bundle component of the centre manifold in terms of an integral equation. With an additional assumption on the jump map, we can extend this result to an ordinary impulsive differential equation on a Banach space.

Definition I.5.4.1

A sequence of functionals \(J(k,\cdot ):\mathcal {R}\mathcal {C}\mathcal {R}\rightarrow \mathbb {R}^n\) for \(k\in \mathbb {Z}\) satisfies the overlap condition (with respect to the sequence \(\{t_k:k\in \mathbb {Z}\}\) of impulse times) if

$$\displaystyle \begin{aligned}\lim_{\epsilon\rightarrow 0^+}J(k,\phi+\chi_{[\theta,\theta+\epsilon)}h)=J(k,\phi)\end{aligned}$$

for all \(\phi \in \mathcal {R}\mathcal {C}\mathcal {R}\) and \(h\in \mathcal {R}\mathcal {C}\mathcal {R}\), whenever \(\theta = t_j - t_k \in [-r, 0)\).

The overlap condition roughly states that the jump functional does not have observable “memory” at times in the past that happen to correspond to impulse times. As the definition is somewhat abstract, we will give an example.

Example I.5.4.1

Consider the scalar impulse effect defined according to

where \(t_k=k\in \mathbb {Z}\) are the integers. The functional associated to the above is simply J(ϕ) = ϕ(−r). If r is a positive integer, the overlap condition will not be satisfied because with \(\theta = t_k - t_{k+r} = -r \in [-r, 0)\), we have

$$\displaystyle \begin{aligned}J(\phi+\chi_{[\theta,\theta+\epsilon)}h)=\phi(-r)+h(-r)\neq \phi(-r)=J(\phi)\end{aligned}$$

for all 𝜖 > 0 and \(h\in \mathcal {R}\mathcal {C}\mathcal {R}\) with h(−r) ≠ 0. However, if r is not an integer, then since any \(\theta = t_j - t_k \in [-r, 0)\) must be an integer, it follows that for 𝜖 > 0 small enough, \(-r\notin [\theta , \theta + \epsilon )\). From here, we can conclude that J satisfies the overlap condition.

Remark I.5.4.1

The overlap condition is equivalent to the statement that J(k, ⋅) admits a continuous extension to a particular closed subspace of \(\mathcal {G}([-r,0],\mathbb {R}^n)\); see Lemma I.6.1.1.

The overlap condition is mostly important for functionals that define discrete delays, since the regularization incurred from distributed delays generally forces the overlap condition to be satisfied. See Sect. I.6.4 for a more thorough discussion. We make use of the overlap condition in the proof of the following theorem. The details are somewhat subtle, and we will spend a fair bit more time on them in Sect. I.5.7.

Theorem I.5.4.1 (Dynamics on the Centre Manifold: Abstract Impulsive Differential Equation)

Let \(y\in \mathcal {R}\mathcal {C}\mathcal {R}^1(\mathbb {R},\mathbb {R}^n)\) satisfy \(y_t\in \mathcal {W}_c(t)\) with ||y|| < δ. Consider the projection w(t) = P c(t)y t and define the linear operators \(\mathcal {L}(t):\mathcal {R}\mathcal {C}\mathcal {R}^1\rightarrow \mathcal {R}\mathcal {C}\mathcal {R}\) and \(\mathcal {J}(k):\mathcal {R}\mathcal {C}\mathcal {R}\rightarrow \mathcal {G}([-r,0],\mathbb {R}^n)\) by

(I.5.11)

If the jump functionals B(k) and g(k, ⋅) satisfy the overlap condition, then \(w:\mathbb {R}\rightarrow \mathcal {R}\mathcal {C}\mathcal {R}^1\) satisfies, pointwise, the abstract impulsive differential equation

$$\displaystyle \begin{aligned}d^+ w(t)&=\mathcal{L}(t)w(t) + P_c(t)\chi_0 F_\delta(t,\mathcal{C}(t,w(t))),&t&\neq t_k \end{aligned} $$
(I.5.12)
$$\displaystyle \begin{aligned}\varDelta w( t_k)&=\mathcal{J}(k) w( t_k^-)+P_c( t_k)\chi_0 G_\delta(k,\mathcal{C}( t_k,w( t_k))),&t&= t_k, \end{aligned} $$
(I.5.13)

where \(w( t_k^-)(\theta ):=\lim _{\epsilon \rightarrow 0^+}w( t_k-\epsilon )(\theta )\) and \(\varDelta w( t_k)(\theta ):=w( t_k)(\theta )-w( t_k^-)(\theta )\) for θ ∈ [−r, 0].

Proof

For brevity, denote \(F(\mu )=F_{\delta ,\mu }(\mu ,\mathcal {C}(\mu ,w(\mu )))\), \(\overline F(\mu )=\chi _0 F(\mu )\), \(\mathbf {F}(\mu ) = P_c(\mu )\chi _0 F(\mu )\), and analogously for \(G_\delta \). We begin by noting that equation (I.5.10) allows us to write the finite difference \(w_{\epsilon }(t) = w(t + \epsilon ) - w(t)\) as

(I.5.14)

First, we show that \(d^+U(t,s)\phi =\mathcal {L}(t)U(t,s)\phi \) pointwise for \(\phi \in \mathcal {R}\mathcal {C}\mathcal {R}\). For θ = 0, we have

$$\displaystyle \begin{aligned} \frac{1}{\epsilon}(U(t+\epsilon,s)\phi(0)-U(t,s)\phi(0)) = \frac{1}{\epsilon}\int_t^{t+\epsilon}L(\mu)U(\mu,s)\phi d\mu, \end{aligned} $$

which converges to L(t)U(t, s)ϕ as 𝜖 → 0+. For θ < 0 and 𝜖 > 0 sufficiently small,

$$\displaystyle \begin{aligned} &\frac{1}{\epsilon}(U(t+\epsilon,s)\phi(\theta)-U(t,s)\phi(\theta)) \\ &= \frac{1}{\epsilon}(\phi(t+\epsilon+\theta-s)-\phi(t+\theta-s))\longrightarrow d^+\phi(t+\theta-s)=d^+ U(t,s)\phi(\theta). \end{aligned} $$

Therefore, \(d^+ U(t,s)\phi =\mathcal {L}(t)U(t,s)\phi \) pointwise, as claimed. Since U(t, t) = I, this also proves the pointwise convergence

$$\displaystyle \begin{aligned}\frac{1}{\epsilon}(U(t+\epsilon,t)-I)\phi\rightarrow\mathcal{L}(t)\phi.\end{aligned}$$

Next, we show that

$$\displaystyle \begin{aligned}\frac{1}{\epsilon}U(t+\epsilon,t)P_c(t)\int_t^{t+\epsilon}U(t,\mu)\overline F(\mu)d\mu\rightarrow P_c(t)\overline F(t)=\mathbf{F}(t)\end{aligned} $$
(I.5.15)

pointwise as 𝜖 → 0+. We do this by first proving that the sequence

$$\displaystyle \begin{aligned}x_n:=\frac{1}{\epsilon_n}U(t+\epsilon_n,t)P_c(t)\int_t^{t+\epsilon_n}U(t,\mu)\overline F(\mu)d\mu\end{aligned}$$

is pointwise Cauchy for each sequence 𝜖 n → 0+. Assuming without loss of generality that 𝜖 n is strictly decreasing, we have for all n ≥ m,

Both integrals can be made arbitrarily small in norm by taking n, m ≥ N and N large enough. Since \(\frac {1}{\epsilon }U(t+\epsilon ,t)\) is pointwise convergent as 𝜖 → 0+, we obtain that the sequence x n is pointwise Cauchy and is hence pointwise convergent. Direct calculation of the limit in the pointwise sense yields (I.5.15). Combining all of the above results with Eq. (I.5.14) gives the pointwise equality

$$\displaystyle \begin{aligned} d^+ w(t) &= \mathcal{L}(t)U(t,s)w(s) + \mathcal{L}(t)\int_s^t U(t,\mu)\mathbf{F}(\mu)d\mu \\ &\quad + \mathbf{F}(t) + \mathcal{L}(t)\sum_{s< t_i\leq t}U(t, t_i)\mathbf{G}(i)\\ &=\mathcal{L}(t)w(t) + \mathbf{F}(t),\end{aligned} $$

which is equivalent to (I.5.12).

To obtain the difference equation (I.5.13), we similarly identify \(w_{\epsilon }( t_k)(\theta ) := w( t_k)(\theta ) - w( t_k - \epsilon )(\theta )\) with the decomposition

$$\displaystyle \begin{aligned} w_{\epsilon}( t_k)&=[U( t_k,s)-U( t_k-\epsilon,s)]w(s) + \int_{ t_k-\epsilon}^{ t_k}U( t_k,\mu)\mathbf{F}(\mu)d\mu \\ &+ \int_s^{ t_k-\epsilon}[U( t_k,\mu)-U( t_k-\epsilon,\mu)]\mathbf{F}(\mu)d\mu\\ &+\sum_{ t_k-\epsilon< t_i\leq t_k}U( t_k, t_i)\mathbf{G}(i) + \sum_{s< t_i\leq t_{k}-\epsilon}[U( t_k, t_i)-U( t_k-\epsilon, t_i)]\mathbf{G}(i). \end{aligned} $$

Using Lemmas I.2.2.1 and I.2.3.5, the above is seen to converge pointwise as 𝜖 → 0+, with limit

$$\displaystyle \begin{aligned} \varDelta w( t_k)&= \tilde{\mathcal{J}}(k) U( t_k^-,s)w(s) + \tilde{\mathcal{J}}(k)\int_s^{ t_k}U( t_k^-,\mu)\mathbf{F}(\mu)d\mu\\ &+ \mathbf{G}(k) + \tilde{\mathcal{J}}(k)\sum_{s< t_i< t_k}U( t_k^-, t_i)\mathbf{G}(i), \end{aligned} $$
(I.5.16)

where \( \tilde {\mathcal {J}}(k)\phi (\theta ) = \chi _0(\theta )B(k)\phi + \chi _{(-r,0)}(\theta )[\phi (\theta )-\phi (\theta ^-)]\), and we assume without loss of generality that r > 0 is large enough so that \(t_k - r \neq t_j\) for all j < k and all \(k\in \mathbb {Z}\). Let us denote

$$\displaystyle \begin{aligned}U^-(t,s)\phi(\theta)=\lim_{\epsilon\rightarrow 0^+}U(t-\epsilon,s)\phi(\theta)\end{aligned}$$

the strong left-limit of the evolution family at t. This limit is well-defined pointwise, and due to the overlap condition, we have

$$\displaystyle \begin{aligned}\tilde{\mathcal{J}}(k)U( t_k^-,\xi)\phi=\mathcal{J}(k)U^-( t_k,\xi)\phi\end{aligned} $$
(I.5.17)

pointwise for all \(\xi < t_k\). Moreover, since

$$\displaystyle \begin{aligned}w( t_k^-)=U^-( t_k,s)w(s) + \int_s^{ t_k}U^-( t_k,\mu)\mathbf{F}(\mu)d\mu + \sum_{s< t_i< t_k}U^-( t_k, t_i)\mathbf{G}(i),\end{aligned} $$
(I.5.18)

we can obtain equation (I.5.13) by substituting (I.5.17) and (I.5.18) into (I.5.16). □

We will not make much use of the abstract differential equation (I.5.12)–(I.5.13) and have included it mostly for the purpose of comparison with analogous results for delay differential equations. As we will see, the integral equation (I.5.10) will be more than sufficient.

4.3 A Remark on Coordinates and Terminology

It is a slight abuse of terminology to describe (I.5.12)–(I.5.13) as an impulsive differential equation on the centre manifold. More precisely, it is the dynamical system satisfied by the projection onto the centre fibre bundle of a given solution in the centre manifold. This precise description is, however, quite verbose, and for this reason we will usually call (I.5.12)–(I.5.13) the impulsive differential equation on the centre manifold, even if this is not exactly what it is.

The evolution equation (I.5.12)–(I.5.13) is quite abstract: it is an evolution equation in the centre fibre bundle that, despite being (in many situations) finite-dimensional, is still rather difficult to use in practice because the fibres \(\mathcal {R}\mathcal {C}\mathcal {R}_c(t)\) are not themselves constant in time. What is needed is an appropriate coordinate system. This would in principle allow for the derivation of an impulsive differential equation in \(\mathbb {R}^p\) for \(p=\dim \mathcal {R}\mathcal {C}\mathcal {R}_c\). We expand on precisely this idea in Sects. I.5.7 and I.6.1; a brief preview follows.
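
As a preview of how such a coordinate system works (an added sketch; the computation is carried out in full in the proof of Lemma I.5.5.1, and the resulting equation is essentially (I.5.19)), suppose \(\varPhi _t\) is a row array whose entries form a basis of \(\mathcal {R}\mathcal {C}\mathcal {R}_c(t)\) and satisfy \(\varPhi _t=U_c(t,0)\varPhi _0\), and let \(Y_c(t)\) be the matrix-valued function defined by \(P_c(t)\chi _0=\varPhi _t Y_c(t)\). Writing \(w(t)=\varPhi _t z(t)\) with \(z(t)\in \mathbb {R}^p\) and using \(U(t,\mu )P_c(\mu )\chi _0=U_c(t,\mu )\varPhi _\mu Y_c(\mu )=\varPhi _t Y_c(\mu )\), the integral equation (I.5.10) reduces to the p-dimensional equation

$$\displaystyle \begin{aligned}z(t)=z(s)+\int_s^t Y_c(\mu)F_{\delta,\mu}(\mu,\mathcal{C}(\mu,\varPhi_\mu z(\mu)))d\mu + \sum_{s< t_i\leq t}Y_c( t_i)G_{\delta,t_i}(i,\mathcal{C}( t_i,\varPhi_{ t_i}z( t_i))),\end{aligned}$$

since the entries of \(\varPhi _t\) are linearly independent.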

5 Reduction Principle

Given a nonhyperbolic equilibrium, one may want to study the orbit structure near this equilibrium under parameter perturbation in the vector field or jump map defining the impulsive functional differential equation (I.4.1)–(I.4.2). Assuming the sufficient conditions for the existence of a centre manifold are satisfied, part 2 of Theorem I.5.3.1 implies that on the centre manifold, the dynamics are completely determined by those of the component in the centre fibre bundle. Part 3 of the same theorem guarantees that the small bounded solutions are all contained on the centre manifold. Lemma I.5.4.1 completely characterizes these dynamics in terms of an integral equation (I.5.10). As a consequence, bifurcations can be detected by analysing this integral equation instead, and no loss of generality occurs by looking only on the centre manifold (at least for small perturbations of the parameter).

The next natural question is the following. If we detect a bifurcation on the centre manifold and the branch of solutions (or union of solutions, e.g. a torus) is stable when restricted to the centre manifold, are we guaranteed that this solution is stable in the infinite-dimensional system, provided \(\mathcal {R}\mathcal {C}\mathcal {R}_u\) is trivial? The answer is yes, and the following results, sometimes referred to as the centre manifold reduction, make this precise. They are inspired by similar results for ordinary differential equations in both finite- and infinite-dimensional systems; see for instance Theorem 2.2 from Chapter 10 of Hale and Verduyn Lunel’s introductory text [58] for functional differential equations, Theorem 3.22 from Chapter 2 of [60] for ordinary differential equations in Banach spaces and the classic text of Jack Carr [22] for finite-dimensional ordinary differential equations, as well as some extensions to infinite-dimensional problems. However, we will require the vector field to be slightly more regular than previously.

Definition I.5.5.1

The functional \(f:\mathbb {R}\times \mathcal {R}\mathcal {C}\mathcal {R}\rightarrow \mathbb {R}^n\) is additive composite regulated (ACR) if for all \(x\in \mathcal {R}\mathcal {C}\mathcal {R}(\mathbb {R},\mathbb {R}^n)\), \(Y\in \mathcal {R}\mathcal {C}\mathcal {R}(\mathbb {R},\mathbb {R}^{n\times m})\) and \(z\in \mathcal {R}\mathcal {C}\mathcal {R}(\mathbb {R},\mathbb {R}^m)\), the function \(t\mapsto f(t, x_t + Y_t z(t))\) is an element of \(\mathcal {R}\mathcal {C}\mathcal {R}(\mathbb {R},\mathbb {R}^n)\).

Remark I.5.5.1

ACR functionals are quite common in applications. For example, suppose \(f:\mathbb {R}\times \mathcal {R}\mathcal {C}\mathcal {R}\rightarrow \mathbb {R}^n\) can be written in the form

$$\displaystyle \begin{aligned} f(t,\phi) = F\left(t,A(t)\phi(-d(t)),\int_{-r}^0 K(t,\theta)\phi(\theta)d\theta\right) \end{aligned} $$

for \(d\in \mathcal {R}\mathcal {C}\mathcal {R}(\mathbb {R},[0,r])\), \(A\in \mathcal {R}\mathcal {C}\mathcal {R}(\mathbb {R},\mathbb {R}^{n\times n})\), \(K:\mathbb {R}\times [-r,0]\rightarrow \mathbb {R}^{n\times n}\) integrable in its second variable, continuous from the right in its first variable and uniformly bounded and \(F:\mathbb {R}\times \mathbb {R}^n\times \mathbb {R}^n\rightarrow \mathbb {R}^n\) jointly continuous from the right in its first variable and continuous in its other variables. It is clear that

$$\displaystyle \begin{aligned}t\mapsto A(t)[x_t(-d(t))+Y_t(-d(t))z(t)]=A(t)[x(t-d(t))+Y(t-d(t))z(t)]\end{aligned}$$

is an element of \(\mathcal {R}\mathcal {C}\mathcal {R}(\mathbb {R},\mathbb {R}^n)\). As for the integral term, the function

$$\displaystyle \begin{aligned}t\mapsto \int_{-r}^0 K(t,\theta)[x(t+\theta)+Y(t+\theta)z(t)]d\theta\end{aligned}$$

can be seen to be an element of \(\mathcal {R}\mathcal {C}\mathcal {R}(\mathbb {R},\mathbb {R}^n)\) by applying the dominated convergence theorem. From the assumptions on F, we conclude that f is ACR. The same holds true for vector fields with multiple time-varying delays and distributed delays.

Lemma I.5.5.1

Assume \(\mathcal {R}\mathcal {C}\mathcal {R}_u=\{0\}\). Let \(\varPhi _t=[\begin {array}{ccc}\phi _t^{(1)}&\cdots &\phi _t^{(p)}\end {array}]\) be a row array whose elements form a basis for \(\mathcal {R}\mathcal {C}\mathcal {R}_c(t)\), the latter being assumed p-dimensional for p finite, such that \(\varPhi _t = U_c(t, 0)\varPhi _0\). Given a mild solution \(x_{(\cdot )}:I\rightarrow \mathcal {R}\mathcal {C}\mathcal {R}\), write \(P_c(t)x_t = \varPhi _t z(t)\) for some \(z(t)\in \mathbb {R}^p\), so that

$$\displaystyle \begin{aligned}x_t=\varPhi_t z(t) + h(t,z(t)) + y_t^S\end{aligned}$$

with \(z(t)\in \mathbb {R}^p\), \(h(t,z):=(I-P_c(t))\mathcal {C}(t,\varPhi _t z)\), and \(y_t^S\in \mathcal {R}\mathcal {C}\mathcal {R}_s(t)\) a remainder term. Assume the matrix-valued function \(Y_c(t)\) defined by the equation \(P_c(t)\chi _0 = \varPhi _t Y_c(t)\) is continuous from the right and possesses limits on the left. There exist positive constants ρ, C and α such that for all \(t \geq s\), the remainder term satisfies

$$\displaystyle \begin{aligned} ||y_t^S||&\leq C||y_s^S - h(s,z(s))||e^{-\alpha(t-s)}, \end{aligned} $$

provided \(||x_t||\leq \rho \) for all \(t \geq s\).

Proof

One can carefully verify that z(t) and \(y_t^S\), respectively, satisfy the following integral equations for all t ≥ s:

$$\displaystyle \begin{aligned} z(t)&=z(s)+\int_s^t Y_c(\mu) \mathcal{F}(\mu,z(\mu),y_\mu^S)d\mu + \sum_{s<t_i\leq t}Y_c(t_i)\mathcal{G}(i,z(t_i),y_{t_i}^S), \end{aligned} $$
(I.5.19)
(I.5.20)

provided ρ < δN, where \(\mathcal {F}(t,z,y)=F_{\delta ,0}(t,\varPhi _t z + h(t,z)+y)\), \(\mathcal {G}(k,z,y)=G_{\delta ,0}(k,\varPhi _{t_k}z+h(t_k,z)+y)\), and \(Y_c(\mu )\) is defined by the equation \(P_c(\mu )\chi _0 = \varPhi _\mu Y_c(\mu )\). Because of our assumption on \(Y_c\), it follows from the integral equation (I.5.19) that z is continuous from the right and possesses limits on the left. If we remark that

$$\displaystyle \begin{aligned}y_t^S = (I-P_c(t))x_t = x_t - \varPhi_t z(t),\end{aligned}$$

we can use Lemma I.1.3.7 to conclude that \(t\mapsto ||y_t^S||\) is an element of \(\mathcal {R}\mathcal {C}\mathcal {R}(I,\mathbb {R})\). Using spectral separation and the Lipschitz condition on the substitution operator, we obtain from (I.5.20) the estimate

$$\displaystyle \begin{aligned} & ||y_t^S||e^{-at}\leq Ke^{-as}||y_s^S-h(s,z(s))|| + \int_s^t KL_\delta||y_\mu^S||e^{-a\mu}d\mu \\ &\quad + \sum_{s<t_i\leq t}KL_\delta ||y_{t_i}^S||e^{-at_i}, \end{aligned} $$

provided \(||x_t||\leq \rho \) for ρ sufficiently small. Next, we apply the Gronwall Inequality (Lemma I.1.5.1) to the function \(t\mapsto ||y_t^S||e^{-at}\). After some simplifications, we get

$$\displaystyle \begin{aligned} ||y_t^S||&\leq K(1+KL_\delta)||y_s^S-h(s,z(s))||\exp\left(\left(a+KL_\delta\left(1+\frac{1}{\xi}\right)\right)(t-s)\right). \end{aligned} $$

We can always guarantee that the exponential rate is of the form \(e^{-\alpha (t-s)}\) for some α > 0 by taking δ sufficiently small, since a < 0 and we have \(L_\delta \rightarrow 0\) as δ → 0 by Corollary I.5.1.1; explicitly, it suffices that \(KL_\delta (1+1/\xi )<-a\), in which case one may take \(\alpha =-(a+KL_\delta (1+1/\xi ))\) and \(C=K(1+KL_\delta )\). The result follows. □

The continuity condition on the matrix-valued function \(t\mapsto Y_c(t)\) comes up in a few places in this monograph. Most notably, it is used in Sect. I.5.7 to guarantee temporal regularity properties of the centre manifold.

Theorem I.5.5.1 (Local Attractivity of the Centre Manifold)

Let the assumptions of Lemma I.5.5.1 be satisfied and let f be an ACR functional. There exists a neighbourhood V of \(0\in \mathcal {R}\mathcal {C}\mathcal {R}\) and positive constants \(K_1\), \(\alpha _1\) such that if \(t\mapsto x_t\) is a mild solution satisfying \(x_t \in V\) for all \(t \geq s\), then there exists \(u_t\in \mathcal {W}_c(t)\) with the property that

$$\displaystyle \begin{aligned}||x_t-u_t||\leq K_1e^{-\alpha_1 (t-s)}\end{aligned}$$

for all \(t \geq s\). That is, every solution that remains close to the centre manifold in forward time is exponentially attracted to a particular solution on the centre manifold. More precisely, there exists \(u\in \mathcal {R}\mathcal {C}\mathcal {R}([s,\infty ),\mathbb {R}^p)\) such that \(t\mapsto \varPhi _t u(t)\) satisfies the abstract integral equation (I.5.10) for the coordinate on the centre manifold, and we have the estimates

$$\displaystyle \begin{aligned} ||P_c(t)x_t - \varPhi_t u(t)||&\leq Ke^{-\alpha_1(t-s)},\\ ||P_s(t)x_t - h(t,u(t))||&\leq Ke^{-\alpha_1(t-s)}. \end{aligned} $$

Proof

With the same setup as in the previous proof, let \(u(t;u_s)\) for t ≥ s denote the solution of the integral equation

$$\displaystyle \begin{aligned} u(t)&=u_s+\int_s^t Y_c(\mu)\mathcal{F}(\mu,u(\mu),0)d\mu + \sum_{s<t_i\leq t}Y_c(t_i)\mathcal{G}(i,u(t_i),0), \end{aligned} $$

for given \(u_s\in \mathbb {R}^p\). Define \(w(t) = z(t) - u(t;u_s)\). With \(x_t=\varPhi _t z(t)+h(t,z(t))+y_t^S\), we have the following integral equations for w and \(y_t^S\):

$$\displaystyle \begin{aligned} y_t^S&=U(t,s)[y_s^S-h(s,w(s)+u_s)] + \int_s^t U(t,\mu)P_s(\mu)\chi_0M_1(\mu,w(\mu) \end{aligned} $$
(I.5.21)
(I.5.22)

with \(M_1\), \(M_2\), \(N_1\) and \(N_2\) defined by

$$\displaystyle \begin{aligned} M_1(\mu,a,b)&=\mathcal{F}(\mu,a,b)-\mathcal{F}(\mu,a,0),\\ M_2(i,a,b)&=\mathcal{G}(i,a,b)-\mathcal{G}(i,a,0),\\ N_1(\mu,a,b)&=\mathcal{F}(\mu,a+u(\mu;u_s),b)-\mathcal{F}(\mu,u(\mu;u_s),0),\\ N_2(i,a,b)&=\mathcal{G}(i,a+u(t_i;u_s),b)-\mathcal{G}(i,u(t_i;u_s),0). \end{aligned} $$

The idea now is to reinterpret the integral equation for w as a fixed-point equation parameterized by \(y_{(\cdot )}^S\) and \(u(\cdot ;u_s)\). Introduce the space

$$\displaystyle \begin{aligned}X=\{\phi\in\mathcal{R}\mathcal{C}\mathcal{R}([s,\infty),\mathbb{R}^p) : ||\phi(t)||e^{-a(t-s)}\leq K\}\end{aligned}$$

equipped with the norm \(||\phi || =\sup _{t\geq s}||\phi (t)||e^{-a(t-s)}\). Define Tw by

$$\displaystyle \begin{aligned} (Tw)(t)&=-\int_t^\infty Y_c(\mu)N_1(\mu,w(\mu),y_\mu^S)d\mu - \sum_{t<t_i<\infty}Y_c(t_i)N_2(i,w(t_i),y_{t_i}^S). \end{aligned} $$
(I.5.23)

If w ∈ X, then from the assumption that f is an ACR functional we can conclude that \(Tw\in \mathcal {R}\mathcal {C}\mathcal {R}([s,\infty ),\mathbb {R}^p)\). So we consider the nonlinear map \(T:X\rightarrow \mathcal {R}\mathcal {C}\mathcal {R}([s,\infty ),\mathbb {R}^p)\). Notice that if w is a fixed point of T, then w satisfies the integral equation (I.5.22). Working backwards, it would then follow by Lemma I.5.5.1 that

$$\displaystyle \begin{aligned}v_t:=\varPhi_t[w(t)+u(t;u_s)]+h(t,w(t)+u(t;u_s))+y_t^S\end{aligned} $$
(I.5.24)

is a solution with the property that

$$\displaystyle \begin{aligned} ||P_c(t)v_t-\varPhi_t u(t;u_s)||&=O(e^{-\gamma t}),\\ ||P_s(t)v_t - h(t,u(t;u_s))||&=O(e^{-\gamma t}), \end{aligned} $$

as \(t\rightarrow \infty \) (recall that if w ∈ X, then w → 0 exponentially as \(t\rightarrow \infty \), while h(t, ⋅) is uniformly Lipschitz with respect to t). It is at this stage that we refer the reader to the proof of Theorem 2 of Carr’s book [22]. The setup having been completed, the proof that T can be made a contraction on X provided δ is sufficiently small is the same as Carr’s argument and is omitted. Specifically, we have the following conclusion: for \(s\in \mathbb {R}\) and any \((u_s,y_s^S)\) sufficiently small, T : X → X is a contraction. In particular, by making the dependence on \((s,u_s,y_s^S)\) explicit and writing \(T:(\mathbb {R}\times \mathbb {R}^p\times \mathcal {R}\mathcal {C}\mathcal {R}_s(s))\times X\rightarrow X\), one can ensure that T is a uniform contraction. In the same way we proved that the centre manifold is (uniformly in t) Lipschitz continuous, one can show that the fixed point \(S^*(s, u, y^S)\) of \(T(s, u, y^S)\) is uniformly (with respect to s) Lipschitz continuous in \(\mathbb {R}^p\times \mathcal {R}\mathcal {C}\mathcal {R}_s(s)\), and the Lipschitz constant can be made as small as needed by taking δ sufficiently small.

Now, for a given \(\phi \in \mathcal {R}\mathcal {C}\mathcal {R}\), define \(u_s(\phi )\) and \(y_s^S(\phi )\) according to

Next, define \(Q(s,\cdot ,\cdot ):\mathbb {R}^p\times \mathcal {R}\mathcal {C}\mathcal {R}_s(s)\rightarrow \mathbb {R}^p\times \mathcal {R}\mathcal {C}\mathcal {R}_s(s)\) by

$$\displaystyle \begin{aligned}Q(s,u,\phi)=(u,\phi)+(\varPhi_s S^*(s,u_s(\phi),y_s^S(\phi)),0).\end{aligned}$$

That is, Q(s, ⋅, ⋅) is a nonlinear perturbation of the identity. If we let \(\psi \in \mathcal {R}\mathcal {C}\mathcal {R}\), then the function \(Q_\psi (s,\cdot ,\cdot ):\mathbb {R}^p\times \mathcal {R}\mathcal {C}\mathcal {R}_s(s)\rightarrow \mathbb {R}^p\times \mathcal {R}\mathcal {C}\mathcal {R}_s(s)\) defined by

$$\displaystyle \begin{aligned}Q_\psi(s,u,\phi)=(u_s(\psi),\psi-\varPhi_s u_s(\psi))-(\varPhi_s S^*(s,u_s(\phi),y_s^S(\phi)),0)\end{aligned}$$

satisfies the property that \(Q(s, u, \phi ) = (\psi _1, \psi _2)\) if and only if

$$\displaystyle \begin{aligned}Q_{\varPhi_s\psi_1 + \psi_2}(s,u,\phi)=(u,\phi).\end{aligned}$$

S (s, ⋅, ⋅) is (uniformly in s) Lipschitz continuous with a Lipschitz constant that goes to zero as δ → 0. Since ψ does not factor into the nonlinear term, Q ψ can be made a uniform (with respect to s and ψ) contraction by taking δ sufficiently small. As a consequence, every \((\psi _1,\psi _2)\in \mathbb {R}^p\times \mathcal {R}\mathcal {C}\mathcal {R}_s(s)\) is in the range of Q(s, ⋅, ⋅) (in fact, Q(s, ⋅, ⋅ is a bijection).

Now, let \(x_t\), defined for t ≥ s, be a mild solution with \(||x_t||\) sufficiently small for all t ≥ s. Write \(x_s=\varPhi _s x_s^c + x_s^S\) for \(x_s^c\in \mathbb {R}^p\) and \(x_s^S\in \mathcal {R}\mathcal {C}\mathcal {R}_s(s)\). Denote \((v_s^c,v_s^S)=Q^{-1}(s,\cdot ,\cdot )(x_s^c,x_s^S)\). Take note that \(v_s^S=x_s^S\). From the above discussion, it follows that with \(u(t)=u(t;v_s^c)\), the asymptotic estimates of the theorem are satisfied. By restricting to a sufficiently small neighbourhood of the origin, we can ignore the cutoffs on the vector field and jump map, thereby obtaining results that are applicable to mild solutions of the system without the cutoff nonlinearity. This proves the theorem. □

5.1 Parameter Dependence

The following heuristic discussion of parameter-dependent centre manifolds will be a bit imprecise. See Sect. I.8.1 for a more concrete presentation.

Suppose the (parameter-dependent) process \(S(t,s;\epsilon ):\mathcal {R}\mathcal {C}\mathcal {R}\rightarrow \mathcal {R}\mathcal {C}\mathcal {R}\) is generated by a parameter-dependent impulsive RFDE with parameter \(\epsilon \in \mathbb {R}^m\), and at 𝜖 = 0, the equilibrium 0 is nonhyperbolic with a p-dimensional centre manifold. One then considers the spatially extended process on \(\mathcal {R}\mathcal {C}\mathcal {R}\times \mathbb {R}^m\) defined by

$$\displaystyle \begin{aligned}(\phi,\epsilon)\mapsto (S(t,s;\epsilon)\phi,\epsilon)\end{aligned}$$

The equilibrium \(0\in \mathcal {R}\mathcal {C}\mathcal {R}\times \mathbb {R}^m\) is now nonhyperbolic with a (p + m)-dimensional centre fibre bundle, so that the function \((x,\epsilon )\mapsto \mathcal {C}(t,x,\epsilon )\) defines a (p + m)-dimensional centre manifold. The dynamics on this centre manifold are trivial in the 𝜖 component, while those in the x component depend, for each fixed 𝜖, on \(x\mapsto \mathcal {C}(t,x,\epsilon )\).
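
Schematically (writing the parameter-dependent semilinear system in a form analogous to (I.4.1)–(I.4.2); the precise notation here is only indicative), the suspension appends a trivial equation for the parameter:

$$\displaystyle \begin{aligned} \dot x(t)&=L(t)x_t+f(t,x_t;\epsilon), &\dot\epsilon&=0, & t&\neq t_k,\\ \varDelta x&=B(k)x_{t_k^-}+g(k,x_{t_k^-};\epsilon), &\varDelta\epsilon&=0, & t&=t_k. \end{aligned} $$

The linearization of the suspended system at the origin acts as U(t, s) in the x direction and as the identity in the 𝜖 direction, so the m parameter directions contribute only centre spectrum; this is the source of the (p + m)-dimensional centre fibre bundle above.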

For small parameters 𝜖 ≠ 0, there may be small solutions in the parameter-dependent centre manifold \(\mathcal {W}_c^\epsilon \) defined by

$$\displaystyle \begin{aligned}\mathcal{W}_c^\epsilon(t)=\{\mathcal{C}(t,x,\epsilon):x\in\mathcal{R}\mathcal{C}\mathcal{R}_c(t)\}\end{aligned}$$

that are locally asymptotically stable when restricted to \(\mathcal {W}_c^\epsilon \). There could also be stable attractors therein—in particular (by Theorem I.5.3.1), any small bounded solutions are contained in the centre manifold. The stability condition in addition to continuity with respect to initial conditions (Theorem I.4.2.1) and attractivity of the centre manifold (Theorem I.5.5.1) then grants the analogous stability of such small solutions or attractors when considered in the scope of the original infinite-dimensional system (I.4.1)–(I.4.2), provided 𝜖 is small enough and \(\mathcal {R}\mathcal {C}\mathcal {R}_u\) is trivial.

To summarize, when the unstable fibre bundle is trivial, the dynamics on the centre manifold completely determine all nearby dynamics. Local stability assertions associated to small solutions and attractors on the parameter-dependent centre manifold carry over to those of the original infinite-dimensional system. The parameter-dependent centre manifold contains all such small solutions and attractors.

6 Smoothness in the State Space

In Sect. I.5.2, we proved the existence of invariant centre manifolds associated to the abstract integral equation (I.4.3). These invariant manifolds are images of a uniformly Lipschitz continuous function \(\mathcal {C}:\mathcal {R}\mathcal {C}\mathcal {R}_c\rightarrow \mathcal {R}\mathcal {C}\mathcal {R}\). Our next task is to prove that the function \(\mathcal {C}\) inherits the smoothness from the generating impulsive functional differential equation. To accomplish this, we will need to introduce an additional regularity assumption on the nonlinear parts of the vector field and jump map.

  1. H.8

    The functions c j and sequences \(\{d_j(k):k\in \mathbb {Z}\}\) introduced in H.5 are bounded.

Note that H.8 is a purely nonautonomous property and is trivially satisfied if the vector field and jump functions are autonomous. Also, we will need to assume in this section that the centre fibre bundle is finite-dimensional.

  1. H.9

    \(\mathcal {R}\mathcal {C}\mathcal {R}_c\) is finite-dimensional.

6.1 Contractions on Scales of Banach Spaces

The rest of this section will utilize several techniques from the theory of contraction mappings on scales of Banach spaces. In particular, many of the proofs that follow are inspired by those relating to smoothness of centre manifolds appearing in [41, 71, 144], albeit adapted somewhat so as to manage the explicitly nonautonomous and impulsive properties of the problem. The following lemma will be very helpful. It is taken from Section IX, Lemma 6.7 of [41], but also appears as Theorem 3 in [144].

Lemma I.5.6.1

Let Y 0, Y, Y 1 be Banach spaces with continuous embeddings J 0 : Y 0 → Y  and J : Y → Y 1 , and let Λ be another Banach space. Consider the fixed-point equation y = f(y, λ) for f : Y × Λ → Y . Suppose the following conditions hold.

  1. b1)

    The function g : Y 0 × Λ → Y 1 defined by (y 0, λ)↦g(y 0, λ)=Jf(J 0 y 0, λ) is of class C 1 , and there exist mappings

    $$\displaystyle \begin{aligned} f^{(1)}&:J_0Y_0\times\varLambda\rightarrow\mathcal{L}(Y),\\ f_1^{(1)}&:J_0Y_0\times\varLambda\rightarrow\mathcal{L}(Y_1). \end{aligned} $$

    such that D 1 g(y 0, λ)ξ = Jf (1)(J 0 y 0, λ)J 0 ξ for all (y 0, λ, ξ) ∈ Y 0 × Λ × Y 0 and \(Jf^{(1)}(J_0y_0,\lambda )y=f_1^{(1)}(J_0y_0,\lambda )Jy\) for all (y 0, λ, y) ∈ Y 0 × Λ × Y .

  2. b2)

    There exists κ ∈ [0, 1) such that f(⋅, λ) : Y → Y  is Lipschitz continuous with Lipschitz constant κ, and each of f (1)(⋅, λ) and \(f^{(1)}_1(\cdot ,\lambda )\) is uniformly bounded by κ.

  3. b3)

    Under the previous condition, the unique fixed point Ψ : Λ → Y  satisfying the equation Ψ(λ) = f(Ψ(λ), λ) itself satisfies Ψ = J 0 ∘ Φ for some continuous Φ : Λ → Y 0.

  4. b4)

    f 0 : Y 0 × Λ → Y  defined by (y 0, λ)↦f 0(y 0, λ) = f(J 0 y 0, λ) has a continuous partial derivative

    $$\displaystyle \begin{aligned}D_2 f:Y_0\times\varLambda\rightarrow\mathcal{L}(\varLambda,Y).\end{aligned}$$
  5. b5)

    The mapping (y, λ)↦J ∘ f (1)(J 0 y, λ) from Y 0 × Λ into \(\mathcal {L}(Y,Y_1)\) is continuous.

Then, the mapping J ∘ Ψ is of class C 1 and \(D(J\circ \varPsi )(\lambda )=J\circ \mathcal {A}(\lambda )\) for all λ ∈ Λ, where \(\mathcal {A}=\mathcal {A}(\lambda )\) is the unique solution of the fixed-point equation A = f (1)(Ψ(λ), λ)A + D 2 f 0(Φ(λ), λ).

The reason we will need this lemma is that substitution operators such as \(R_\delta :\mathcal {P}\mathcal {C}^{\eta ,s}\rightarrow PC^{\eta ,s}(\mathbb {R},\mathbb {R}^n)\oplus B_{t_k}^{\eta ,s}(\mathbb {Z},\mathbb {R}^n)\) defined in Corollary I.5.1.1, though Lipschitz continuous, are generally not differentiable. The surprising result is that if one instead considers the codomain to be \(PC^{\zeta ,s}(\mathbb {R},\mathbb {R}^n)\oplus B_{t_k}^{\zeta ,s}(\mathbb {Z},\mathbb {R}^n)\) for some ζ > η, then the substitution operator becomes differentiable. Since X η-type spaces admit continuous embeddings \(J:X^{\eta _1}\hookrightarrow X^{\eta _2}\) whenever η 1 ≤ η 2, the centre manifold itself can be considered to be embedded in any appropriate weighted Banach space with high enough exponent η. An appropriate application of Lemma I.5.6.1 to the defining fixed-point equation (I.5.7) of the centre manifold will allow us to prove that a composition of the embedding operator with the fixed point is a C 1 function. An inductive argument will ultimately get us to C m smoothness.
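
The mechanism behind this gain of smoothness can be illustrated by a simple scalar model computation (this is only an illustration, and is not the operator \(R_\delta \) itself). Let \(g\in C^1(\mathbb {R},\mathbb {R})\) have bounded and globally Lipschitz derivative, define the substitution operator \(G(u)(t)=g(u(t))\) on exponentially weighted continuous functions, and consider the candidate derivative \(DG(u)v(t)=g'(u(t))v(t)\). The remainder \(r(t)=g(u(t)+v(t))-g(u(t))-g'(u(t))v(t)\) satisfies both \(|r(t)|\leq 2||g'||{}_{\infty}|v(t)|\) and \(|r(t)|\leq \mathrm{Lip}(g')|v(t)|{}^2\), so for exponents ζ > η and any T > 0,

$$\displaystyle \begin{aligned} \sup_{|t|\geq T}e^{-\zeta|t|}|r(t)|\leq 2||g'||{}_{\infty}e^{-(\zeta-\eta)T}||v||{}_{\eta},\qquad \sup_{|t|\leq T}e^{-\zeta|t|}|r(t)|\leq \mathrm{Lip}(g')e^{2\eta T}||v||{}_{\eta}^2. \end{aligned} $$

Taking T large and then \(||v||{}_{\eta}\) small shows that \(||r||{}_{\zeta}=o(||v||{}_{\eta})\), so G is differentiable as a map into the ζ-weighted space: the weight gap ζ − η absorbs the tails, whereas no such absorption is available when ζ = η.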

6.2 Candidate Differentials of the Substitution Operators

Recall the definition of the modified nonlinearities

$$\displaystyle \begin{aligned} F_{\delta,s}(t,x)&=f(t,x)\xi\left(\frac{||P_c(s)x||}{N\delta}\right)\xi\left(\frac{||(P_s(s)+P_u(s))x||}{N\delta}\right)\\ G_{\delta,s}(k,x)&=g(k,x_{0^-})\xi\left(\frac{||P_c(s)x_{0^-}||}{N\delta}\right)\xi\left(\frac{||(P_s(s)+P_u(s))x_{0^-}||}{N\delta}\right). \end{aligned} $$

Since s is fixed, we may assume without loss of generality that ||⋅|| is smooth on \(\mathcal {R}\mathcal {C}\mathcal {R}_c(0)\setminus \{0\}\). We introduce a symbolic modification of the fixed-point operator:

$$\displaystyle \begin{aligned}\mathcal{G}_\delta^{\eta,s}:\mathcal{P}\mathcal{C}^{\eta,s}\times\mathcal{R}\mathcal{C}\mathcal{R}_c(s)\rightarrow\mathcal{P}\mathcal{C}^{\eta,s}\end{aligned}$$

defined in the same way as equation (I.5.7). The only difference here is that we wish to make the dependence on η, s and δ explicit. We denote the associated fixed point by \(\tilde u_{\eta ,s}\), provided δ > 0 is sufficiently small.

From this point on, our attention shifts to proving the smoothness of \(\tilde u_{\eta ,s}:\mathcal {R}\mathcal {C}\mathcal {R}_c(s)\rightarrow \mathcal {P}\mathcal {C}^{\eta ,s}\) as defined by the fixed point of (I.5.7). We begin with some notation. Define \(\mathcal {P}\mathcal {C}^\infty = \cup _{\eta >0}\mathcal {P}\mathcal {C}^\eta \). Let

$$\displaystyle \begin{aligned}V^\eta=\{u\in \mathcal{P}\mathcal{C}^\eta : ||(I-\widehat{P}_c) u||{}_0<\infty\},\end{aligned}$$

where \(\widehat {P}_c\) is the projection operator from Lemma I.5.3.1. Equipped with the norm

$$\displaystyle \begin{aligned}||u||{}_{V^{\eta,s}}=||\mathcal{P}_c u||{}_{\eta,s}+ ||(I-\mathcal{P}_c) u||{}_0,\end{aligned}$$

the space V η, s is complete, where the s-shifted definitions are as outlined at the beginning of Sect. I.5.1.1.

Let δ > 0 be chosen as in Lemma I.5.3.1, define

$$\displaystyle \begin{aligned}V^\eta_\delta=\{u\in V^\eta : ||(I-\widehat{P}_c)u||{}_0<\delta\}\end{aligned}$$

and define \(V^\eta _\delta (t)\subset \mathcal {R}\mathcal {C}\mathcal {R}\) by \(V^\eta _\delta (t)=\{u(t):u\in V^\eta _\delta \}\). Also, define the set \(V^\infty _\delta =\cup _{\eta >0}V^\eta _\delta \). Set \(B^\eta =PC^\eta (\mathbb {R},\mathbb {R}^n)\oplus B^\eta _{{{t}}_k}(\mathbb {Z},\mathbb {R}^n)\) and B ∞ = ∪η>0 B η. Finally, the bounded p-linear maps from X 1 ×⋯ × X p to Y  for Banach spaces X i and Y  will be denoted \(\mathcal {L}^p(X_1\times \cdots \times X_p,Y)\). We will simply write \(\mathcal {L}^p\) when there is no risk of confusion.

By construction of the modified nonlinearity R δ,s and the choice of δ from Lemma I.5.3.1, the functions u↦F δ,s(t, u) and u↦G δ,s(k, u) are C m on \(V^\eta _\delta (t)\) and \(V^\eta _\delta ({{t}}_k)\), respectively, for all \(t\in \mathbb {R}\) and \(k\in \mathbb {Z}\). We are therefore free to define

$$\displaystyle \begin{aligned} F_{\delta,s}^{(p)}u(t)&=D^pF_{\delta,s}(t,u(t)),\\ G_{\delta,s}^{(p)}u(k)&=D^pG_{\delta,s}(k,u({{t}}_k)), \end{aligned} $$

for 1 ≤ p ≤ m, where D p denotes the pth Fréchet derivative with respect to the second variable. For each \(u\in V^\infty _\delta \), we can define a p-linear map \(\tilde R_{\delta ,s}^{(p)}(u):\mathcal {P}\mathcal {C}^\infty \times \cdots \times \mathcal {P}\mathcal {C}^\infty \rightarrow B^\infty \) by the equation

$$\displaystyle \begin{aligned} \tilde R_{\delta,s}^{(p)}(u)(v_1,\dots,v_p)(t,k)=(F_{\delta,s}^{(p)}u(t)(v_1(t),\dots,v_p(t)),G_{\delta,s}^{(p)}u(k)(v_1(t_k),\dots,v_p(t_k))). \end{aligned} $$
(I.5.25)

For p = 0, we define \(\tilde R_{\delta ,s}^{(0)}=R_{\delta ,s}\).

6.3 Smoothness of the Modified Nonlinearity

In this section we elaborate on various properties of the substitution operator R δ,s and its formal derivative \(\tilde R_{\delta ,s}^{(p)}\) introduced in equation (I.5.25). The first thing we need to do is extend condition H.5 to the modified nonlinearities.

Lemma I.5.6.2

For j = 1, …, m, there exist constants \(\tilde c_j,\tilde d_j,\tilde q>0\) such that

$$\displaystyle \begin{aligned} ||D^j\tilde F_{\delta,s}(t,\phi)-D^j\tilde F_{\delta,s}(t,\psi)||&\leq \tilde c_j||\phi-\psi||,&||D^j\tilde F_{\delta,s}(t,\phi)||&\leq \tilde q\tilde c_j&\phi,\psi&\in V_\delta^\infty(t)\\ ||D^j\tilde G_{\delta,s}(k,\phi){-}D^j \tilde G_{\delta,s}(k,\psi)||&\leq \tilde d_j||\phi{-}\psi||,\!\!&||D^j\tilde G(k,\phi)||&\leq \tilde q\tilde d_j&\phi,\psi&\in V_{\delta,s}^\infty({{t}}_k). \end{aligned} $$

Proof

We prove only the Lipschitzian property for D j F δ,s, since the boundedness and corresponding results for D j G δ,s are proven similarly. Denote

$$\displaystyle \begin{aligned}X(s,\phi)=\xi\left(\frac{||P_c(s)\phi||}{N\delta}\right)\xi\left(\frac{||(I-P_c(s))\phi||}{N\delta}\right).\end{aligned}$$

When \(\phi ,\psi \in V^\infty _\delta (t)\), X is m-times continuously differentiable and its derivative is globally Lipschitz continuous. Moreover, the Lipschitz constant can be chosen independent of s because of the uniform boundedness (property 1) of the projection operators. Let \(\mbox{Lip}^k_X\) denote the Lipschitz constant for D k X(s, ⋅). Then,

$$\displaystyle \begin{aligned} &D^j \tilde F_{\delta,s}(t,\phi)-D^j\tilde F_{\delta,s}(t,\psi)=D^j\left[f(t,\phi)X(s,\phi)-f(t,\psi)X(s,\psi) \right]\\ &=\sum_{N_1,N_2\in P_2(j)}D^{\# N_1}f(t,\phi)D^{\#N_2}X(s,\phi)-D^{\# N_1}f(t,\psi)D^{\#N_2}X(s,\psi)\\ &=\sum_{N_1,N_2\in P_2(j)}D^{\# N_1}[f(t,\phi)-f(t,\psi)]D^{\# N_2}X(s,\phi)\\ &+ D^{\# N_1}f(t,\psi)D^{\# N_2}[X(s,\phi)-X(s,\psi)], \end{aligned} $$

where P 2(j) denotes the set of partitions of length two from the set {1, …, j} and #Y  is the cardinality of Y . Restricted to the ball B 2δ(0), the Lipschitz constants for D j f(t, ⋅) and the boundedness estimates from H.5 then imply the estimate

$$\displaystyle \begin{aligned}||D^j\tilde F_{\delta,s}(t,\phi)-D^j\tilde F_{\delta,s} (t,\psi)||\leq\left(\sum_{N_1,N_2\in P_2(j)}(1+q)c_{\# N_1}(t)\mbox{Lip}_X^{\# N_2} \right)||\phi-\psi||.\end{aligned}$$

As each of c j and d j is bounded, the Lipschitz constant admits an upper bound. Outside of B 2δ(0), X and all of its derivatives are identically zero. □

Lemma I.5.6.3

Let 1 ≤ p ≤ m, μ i > 0 for i = 1, …, p, μ = μ 1 + ⋯ + μ p and η ≥ μ. Then we have \(\tilde R^{(p)}_{\delta ,s}(u)\in \mathcal {L}^p(\mathcal {P}\mathcal {C}^{\mu _1}\times \cdots \times \mathcal {P}\mathcal {C}^{\mu _p},B^\eta )\) for all \(u\in V^\infty _\delta \) , with

$$\displaystyle \begin{aligned}||\tilde R^{(p)}_{\delta,s}(u)||{}_{\mathcal{L}^p}&\leq \sup_{t\in\mathbb{R}}||\tilde F_{\delta,s}^{(p)}u(t)||e^{-(\eta-\mu)|t|}+\sup_{k\in\mathbb{Z}}||\tilde G_{\delta,s}^{(p)}u(k)||e^{-(\eta-\mu)|{{t}}_k|} \\ &=||\tilde R_{\delta,s}^{(p)}(u)||{}_{\eta-\mu}. \end{aligned} $$

Also, \(u\mapsto \tilde R_{\delta ,s}^{(p)}(u)\) is continuous as a mapping \(\tilde R_{\delta ,s}^{(p)}:V^\sigma _\delta \rightarrow \mathcal {L}^p(\mathcal {P}\mathcal {C}^{\mu _1}\times \cdots \times \mathcal {P}\mathcal {C}^{\mu _p},B^\eta )\) if η > μ, for all σ > 0.

Proof

For brevity, denote \(\tilde R_\delta =\tilde R_{\delta ,s}\), and similarly for \(\tilde F\) and \(\tilde G\). It is easy to verify that \(\tilde R_\delta ^{(p)}(u)\) is p-linear. For boundedness,

$$\displaystyle \begin{aligned} &||\tilde R_\delta^{(p)}(u)||{}_{\mathcal{L}^p}=\sup_{\substack{t\in\mathbb{R},k\in\mathbb{Z}\\ ||v||{}_{\vec\mu}=1}}||\tilde F_\delta^{(p)}u(t)(v(t))||e^{-\eta|t|} + ||\tilde G_\delta^{(p)}u(k)(v(t_k))||e^{-\eta|t_k|}\\ &\leq\sup_{\substack{t\in\mathbb{R}\\ ||v||{}_{\vec\mu}=1}}||\tilde F_\delta^{(p)}u(t)(v(t))||e^{-\eta|t|} + \sup_{\substack{k\in\mathbb{Z} \\ ||w||{}_{\vec\mu}=1}}||\tilde G_\delta^{(p)}u(k)(w(t_k))||e^{-\eta|t_k|}\\ &\leq \sup_{\substack{t\in\mathbb{R}\\||v||{}_{\vec\mu}=1}}||\tilde F_\delta^{(p)}u(t)||\cdot\!\left(\prod_j||v_j(t)||\right)\!e^{-\eta|t|}{+}\sup_{\substack{k\in\mathbb{Z} \\ ||w||{}_{\vec\mu}=1}}||\tilde G_\delta^{(p)}u(k)||\cdot\!\left(\prod_j||w_j(t_k)||\right)\!e^{-\eta|t_k|}\\ &=\sup_{t\in\mathbb{R}}||\tilde F_\delta^{(p)}u(t)||e^{-(\eta-\mu)|t|}+\sup_{k\in\mathbb{Z}}||\tilde G_\delta^{(p)}u(k)||e^{-(\eta-\mu)|t_k|}, \end{aligned} $$

where the condition \(||v||{ }_{\vec \mu }=1\) means that \(v=(v_1,\dots ,v_p)\in \mathcal {P}\mathcal {C}^{\mu _1}\times \cdots \times \mathcal {P}\mathcal {C}^{\mu _p}\) satisfies \(||v_i||{ }_{\mu _i}=1\) for i = 1, …, p. The latter term in the inequality is finite by Lemma I.5.6.2 whenever η ≥ μ. In particular, the latter lemma implies that for all \(\phi \in V^\infty _\delta \), one has \(\sup _{t\in \mathbb {R}}||D^j \tilde F_\delta (t,\phi (t))||\leq \tilde q\tilde c_j\), and similarly for \(\tilde G_\delta \). This uniform boundedness can then be used to prove the continuity of \(u\mapsto \tilde R_\delta ^{(p)}(u)\) when η > μ; the proof follows that of [Lemma 7.3 [71]] and is omitted here. □

The proofs of the following lemmas are essentially identical to the proofs of [Corollary 7.5, Corollary 7.6, Lemma 7.7 [71]] and are omitted.

Lemma I.5.6.4

Let η 2 > kη 1 > 0 and 1 ≤ p ≤ k. Then, \(\tilde R_{\delta ,s}:V_{\delta }^{\eta _1}\rightarrow \mathcal {L}^p(\mathcal {P}\mathcal {C}^{\eta _1}\times \cdots \times \mathcal {P}\mathcal {C}^{\eta _1},B^{\eta _2})\) is C k and \(D^p\tilde R_{\delta ,s} = \tilde R^{(p)}_{\delta ,s}\).

Lemma I.5.6.5

Let 1 ≤ p ≤ m, μ i > 0 for i = 1, …, p, μ = μ 1 + ⋯ + μ p and η ≥ μ. Then, \(\tilde R_{\delta ,s}^{(p)}:V_\delta ^\sigma \rightarrow \mathcal {L}^p(\mathcal {P}\mathcal {C}^{\mu _1}\times \cdots \times \mathcal {P}\mathcal {C}^{\mu _p},B^\eta )\) is C k−p provided η > μ + (k − p)σ.

Lemma I.5.6.6

Let 1 ≤ p ≤ k, μ i > 0 for i = 1, …, p, μ = μ 1 + ⋯ + μ p and η > μ + σ for some σ > 0. Let \(X:\mathcal {R}\mathcal {C}\mathcal {R}_c(s)\rightarrow V_\delta ^\sigma \) be C 1 . Then, \(\tilde R_{\delta ,s}^{(p)}\circ X:\mathcal {R}\mathcal {C}\mathcal {R}_c(s)\rightarrow \mathcal {L}^p(\mathcal {P}\mathcal {C}^{\mu _1}\times \cdots \times \mathcal {P}\mathcal {C}^{\mu _p},B^\eta )\) is C 1 and

$$\displaystyle \begin{aligned}D\left(\tilde R_{\delta,s}^{(p)}\circ X\right)(\phi)(v_1,\dots,v_p,\psi)=\tilde R_{\delta,s}^{(p+1)}(X(\phi))(v_1,\dots,v_p,X^{\prime}(\phi)\psi).\end{aligned}$$

6.4 Proof of Smoothness of the Centre Manifold

With our preparations complete, we can formulate and prove the statement concerning the smoothness of the centre manifold.

Theorem I.5.6.1

Let \(\mathcal {J}_s^{\eta _2,\eta _1}:\mathcal {P}\mathcal {C}^{\eta _1,s}\rightarrow \mathcal {P}\mathcal {C}^{\eta _2,s}\) denote the (continuous) embedding operator for η 1 ≤ η 2 . Let \([\tilde \eta ,\overline \eta ]\subset (0,\min \{-a,b\})\) be such that \(m\tilde \eta <\overline \eta \) . Then, for each p ∈{1, …, m} and \(\eta \in (p\tilde \eta ,\overline \eta ]\) , the mapping \(\mathcal {J}_s^{\eta \tilde \eta }\circ \tilde u_{\tilde \eta ,s}:\mathcal {R}\mathcal {C}\mathcal {R}_c(s)\rightarrow \mathcal {P}\mathcal {C}^{\eta ,s}\) is of class C p provided δ > 0 is sufficiently small.

Proof

The proof here follows the same lines as Theorem 7.7 from Section IX of [41]. To begin, we choose δ > 0 small enough so that Lemma I.5.3.1 is satisfied in addition to having \(NL_\delta ||\mathcal {K}^\eta _s||<\frac {1}{4}\) for all \(\eta \in [\tilde \eta ,\overline \eta ]\). Remark that this condition ensures that the centre manifold has range in V η. By Lemma I.5.1.3 and Corollary I.5.1.1, this can always be done in such a way that the inequality holds for all \(s\in \mathbb {R}\).

We proceed by induction on k. For p = 1 = k, we let \(\eta \in (\tilde \eta ,\overline \eta ]\) and show that Lemma I.5.6.1 applies with

$$\displaystyle \begin{aligned} Y_0&=V_\delta^{\tilde\eta,s},&Y&=\mathcal{P}\mathcal{C}^{\tilde\eta,s},&Y_1&=\mathcal{P}\mathcal{C}^{\eta,s},&\varLambda&=\mathcal{R}\mathcal{C}\mathcal{R}_c(s)\\ f(u,\varphi)&=\tilde{\mathcal{G}}_\delta^{\tilde\eta,s}(u,\varphi),&f^{(1)}(u,\varphi)&=\mathcal{K}_s^{\tilde\eta}\circ\tilde R_{\delta,s}^{(1)}(u),&f_1^{(1)}(u,\varphi)&=\mathcal{K}_s^{\eta}\circ\tilde R_{\delta,s}^{(1)}(u), \end{aligned} $$

with embeddings \(J=\mathcal {J}_s^{\eta \tilde \eta }\) and \(J_0:V_\delta ^{\tilde \eta ,s}\hookrightarrow \mathcal {P}\mathcal {C}^{\tilde \eta ,s}\). To check condition b1, we must first verify the C 1 smoothness of

$$\displaystyle \begin{aligned}V_\delta^{\tilde\eta,s}\times\mathcal{R}\mathcal{C}\mathcal{R}_c(s)\ni (u,\varphi)\mapsto g(u,\varphi)=\mathcal{J}_s^{\eta\tilde\eta}\left(U(\cdot,s)\varphi + \mathcal{K}^{\tilde\eta}_s\circ\tilde R_{\delta,s} (J_0 u) \right).\end{aligned}$$

The embedding operator \(\mathcal {J}_s^{\eta \tilde \eta }\) is itself C 1, as is φU(⋅, s)φ and \(J_0 u\mapsto \tilde R_{\delta ,s}(J_0 u)\), the latter due to Lemma I.5.6.4. C 1 smoothness of g then follows by continuity of the linear embedding J 0. Verification of the equalities D 1 g(u, φ)ξ = Jf (1)(J 0 u, φ)J 0 ξ and \(Jf^{(1)}(J_0 u,\varphi )\xi = f_1^{(1)}(J_0u,\varphi )J\xi \) is straightforward. Condition b2 follows by boundedness of the embedding operators and the small Lipschitz constant for \(\tilde {\mathcal {G}}_{\delta ,s}^{\tilde \eta ,s}\). For condition b3, the fixed point is \(\tilde u_{\tilde \eta ,s}:\mathcal {R}\mathcal {C}\mathcal {R}_c(s)\rightarrow \mathcal {P}\mathcal {C}^{\tilde \eta ,s}\), and we may factor it as \(\tilde u_{\tilde \eta ,s} = J_0\circ \varPhi \) with \(\varPhi :\mathcal {R}\mathcal {C}\mathcal {R}_c(s)\rightarrow V_\delta ^{\tilde \eta ,s}\) defined by \(\varPhi (\varphi )=\tilde u_{\tilde \eta ,s}(\varphi )\); the latter is continuous by Theorem I.5.2.1, and the factorization is justified by Lemma I.5.3.1. To check condition b4, we must verify that

$$\displaystyle \begin{aligned}V_\delta^{\tilde\eta,s}\times\mathcal{R}\mathcal{C}\mathcal{R}_c(s)\ni (u,\varphi)\mapsto f_0(u,\varphi)=\tilde{\mathcal{G}}_{\delta,s}^{\tilde\eta,s}(J_0 u,\varphi)\end{aligned}$$

has a continuous partial derivative in its second variable—this is clear since f 0 is linear in φ. Finally, condition b5 requires us to verify that the map \((u,\varphi )\mapsto \mathcal {J}_s^{\eta \tilde \eta }\circ \mathcal {K}_s^{\tilde \eta }\circ \tilde R_{\delta ,s}^{(1)}(J_0 u)\) is continuous from \(V_\delta ^{\tilde \eta ,s}\times \mathcal {R}\mathcal {C}\mathcal {R}_c(s)\) into \(\mathcal {L}(\mathcal {R}\mathcal {C}\mathcal {R}_c(s),\mathcal {P}\mathcal {C}^{\eta ,s})\), but this once again follows by the continuity of the embedding operators and the smoothness of \(\tilde R_{\delta ,s}\) from Lemma I.5.6.4.

The conditions of Lemma I.5.6.1 are satisfied, and we conclude that \(\mathcal {J}^{\eta \tilde \eta }\circ \tilde u_{\tilde \eta ,s}\) is of class C 1 and that the derivative \(D(\mathcal {J}^{\eta \tilde \eta }\circ \tilde u_{\tilde \eta ,s}(\varphi ))\in \mathcal {L}(\mathcal {R}\mathcal {C}\mathcal {R}_c(s),\) \(\mathcal {P}\mathcal {C}^{\eta ,s})\) is the unique solution w (1) of the equation

$$\displaystyle \begin{aligned} w^{(1)}=\mathcal{K}^{\tilde\eta}_s\circ\tilde R_{\delta,s}^{(1)}(\tilde u_s^{\tilde\eta}(\varphi))w^{(1)}+U(\cdot,s):=F_1(w^{(1)},\varphi). \end{aligned} $$
(I.5.26)

The mapping \(F_1:\mathcal {L}(\mathcal {R}\mathcal {C}\mathcal {R}_c(s),\mathcal {P}\mathcal {C}^{\eta ,s})\times \mathcal {R}\mathcal {C}\mathcal {R}_c(s)\rightarrow \mathcal {L}(\mathcal {R}\mathcal {C}\mathcal {R}_c(s),\mathcal {P}\mathcal {C}^{\eta ,s})\) is a uniform contraction for \(\eta \in [\tilde \eta ,\overline \eta ]\)—indeed, F 1(⋅, φ) is Lipschitz continuous with Lipschitz constant \(\tilde L_\delta \cdot ||\mathcal {K}^\eta _s||<\frac {1}{4}\); this follows from Lemma I.5.6.3 and is independent of s. Thus, \(\tilde u_s^{(1)}(\varphi )\in \mathcal {L}(\mathcal {R}\mathcal {C}\mathcal {R}_c(s),\mathcal {P}\mathcal {C}^{\tilde \eta ,s})\hookrightarrow \mathcal {L}(\mathcal {R}\mathcal {C}\mathcal {R}_c(s),\mathcal {P}\mathcal {C}^{\eta ,s})\) for \(\eta \geq \tilde \eta \). Moreover, \(\tilde u_s^{(1)}:\mathcal {R}\mathcal {C}\mathcal {R}_c(s)\rightarrow \mathcal {L}(\mathcal {R}\mathcal {C}\mathcal {R}_c(s),\mathcal {P}\mathcal {C}^{\eta ,s})\) is continuous if \(\eta \in (\tilde \eta ,\overline \eta ]\).

Now, let 1 ≤ p ≤ k for k ≥ 1 and suppose that for all q ∈{1, …, p} and all \(\eta \in (q\tilde \eta ,\overline \eta ]\), the mapping

$$\displaystyle \begin{aligned}\mathcal{J}_s^{\eta\tilde\eta}\circ\tilde u_{\tilde\eta,s}:\mathcal{R}\mathcal{C}\mathcal{R}_c(s)\rightarrow \mathcal{P}\mathcal{C}^{\eta,s}\end{aligned}$$

is of class C q with \(D^q(\mathcal {J}_s^{\eta \tilde \eta }\circ \tilde u_s^{\tilde \eta })=\mathcal {J}_s^{\eta \tilde \eta }\circ \tilde u_{\tilde \eta ,s}^{(q)}\) and \(\tilde u_{\tilde \eta ,s}^{(q)}(\varphi )\in \mathcal {L}^q(\mathcal {R}\mathcal {C}\mathcal {R}_c(s),\) \(\mathcal {P}\mathcal {C}^{q\tilde \eta ,s})\) for each \(\varphi \in \mathcal {R}\mathcal {C}\mathcal {R}_c(s)\), such that the mapping

$$\displaystyle \begin{aligned}\mathcal{J}_s^{\eta\tilde\eta}\circ \tilde u_{\tilde\eta,s}^{(q)}:\mathcal{R}\mathcal{C}\mathcal{R}_c(s)\rightarrow\mathcal{L}^q(\mathcal{R}\mathcal{C}\mathcal{R}_c(s),\mathcal{P}\mathcal{C}^{\eta,s})\end{aligned}$$

is continuous for \(\eta \in (q\tilde \eta ,\overline \eta ]\). Suppose additionally that \(\tilde u_{\tilde \eta ,s}^{(p)}(\varphi )\) is the unique solution w (p) of an equation

$$\displaystyle \begin{aligned}w^{(p)}=\mathcal{K}_s^{\tilde\eta p}\circ\tilde R_{\delta,s}^{(1)}(\tilde u_{\tilde\eta,s}(\varphi))w^{(p)}+H_{\tilde\eta}^{(p)}(\varphi):=F_{\tilde\eta}^{(p)}(w^{(p)},\varphi),\end{aligned} $$
(I.5.27)

with \(H^{(1)} = U(\cdot ,s)\), while \(H^{(p)}_{x}(\varphi )\) for p ≥ 2 is a finite sum of terms of the form

$$\displaystyle \begin{aligned}\mathcal{K}^{px}_s\circ \tilde R^{(q)}_{\delta,s}(\tilde u_{\tilde\eta,s}(\varphi))(\tilde u_{\tilde\eta,s}^{(r_1)}(\varphi),\cdots,\tilde u_{\tilde\eta,s}^{(r_q)}(\varphi))\end{aligned}$$

with 2 ≤ q ≤ p, 1 ≤ r i < p for i = 1, …, q, such that r 1 + ⋯ + r q = p. Under such assumptions, the mapping \(F^{(p)}_{\tilde \eta }:\mathcal {L}^p(\mathcal {R}\mathcal {C}\mathcal {R}_c(s),\mathcal {P}\mathcal {C}^{\eta ,s})\times \mathcal {R}\mathcal {C}\mathcal {R}_c(s)\rightarrow \mathcal {L}^p(\mathcal {R}\mathcal {C}\mathcal {R}_c(s),\mathcal {P}\mathcal {C}^{\eta ,s})\) is a uniform contraction for all \(\eta \in [p\tilde \eta ,\overline \eta ]\); see Lemma I.5.6.3.
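
For orientation, in the lowest-order case p = 2 this finite sum consists of the single term

$$\displaystyle \begin{aligned}H^{(2)}_{x}(\varphi)=\mathcal{K}^{2x}_s\circ \tilde R^{(2)}_{\delta,s}(\tilde u_{\tilde\eta,s}(\varphi))\bigl(\tilde u^{(1)}_{\tilde\eta,s}(\varphi),\tilde u^{(1)}_{\tilde\eta,s}(\varphi)\bigr),\end{aligned}$$

that is, q = 2 and r 1 = r 2 = 1; this is consistent with the recursion (I.5.28) derived below.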

Next, choose some \(\eta \in ((p+1)\tilde \eta ,\overline \eta ]\), \(\sigma \in (\tilde \eta ,\eta /(p+1)]\) and μ ∈ ((p + 1)σ, η). We will verify the conditions of Lemma I.5.6.1 with the spaces and functions

$$\displaystyle \begin{aligned} Y_0&=\mathcal{L}^p(\mathcal{R}\mathcal{C}\mathcal{R}_c(s),\mathcal{P}\mathcal{C}^{p\sigma,s}),&Y&=\mathcal{L}^p(\mathcal{R}\mathcal{C}\mathcal{R}_c(s),\mathcal{P}\mathcal{C}^{\mu,s}),\\ {} Y_1&=\mathcal{L}^p(\mathcal{R}\mathcal{C}\mathcal{R}_c(s),\mathcal{P}\mathcal{C}^{\eta,s})\\ {} f(u,\varphi)&=\mathcal{K}^{\mu}_s\circ\tilde R_{\delta,s}^{(1)}(\tilde u_{\tilde\eta,s}(\varphi))u+H^{(p)}_{\mu/p}(\varphi),&\varLambda&=\mathcal{R}\mathcal{C}\mathcal{R}_c(s),\\ {} f^{(1)}(u,\varphi)&=\mathcal{K}^{\mu}_s\circ\tilde R_{\delta,s}^{(1)}(\tilde u_{\tilde\eta,s}(\varphi))\in\mathcal{L}(Y),\\ {} f^{(1)}_1(u,\varphi)&=\mathcal{K}^{\eta}_s\circ\tilde R_{\delta,s}^{(1)}(\tilde u_{\tilde\eta,s}(\varphi))\in\mathcal{L}(Y_1). \end{aligned} $$

We begin with the verification of condition b1. We must check that

$$\displaystyle \begin{aligned}\mathcal{L}^p(\mathcal{R}\mathcal{C}\mathcal{R}_c(s),\mathcal{P}\mathcal{C}^{p\sigma,s})\times\mathcal{R}\mathcal{C}\mathcal{R}_c(s)\ni (u,\varphi)\mapsto \mathcal{J}^{\eta\mu}\circ\mathcal{K}^{\mu}_s\circ\tilde R_{\delta,s}^{(1)}(\tilde u_{\tilde\eta,s}(\varphi))u+\mathcal{J}^{\eta\mu}\circ H^{(p)}_{\mu/p}(\varphi)\end{aligned}$$

is of class C 1, where now \(\mathcal {J}^{\eta _2\eta _1}:\mathcal {L}^p(\mathcal {R}\mathcal {C}\mathcal {R}_c(s),\mathcal {P}\mathcal {C}^{\eta _1,s})\hookrightarrow \mathcal {L}^p(\mathcal {R}\mathcal {C}\mathcal {R}_c(s),\) \(\mathcal {P}\mathcal {C}^{\eta _2,s})\). The above mapping is C 1 with respect to \(u\in \mathcal {L}^p(\mathcal {R}\mathcal {C}\mathcal {R}_c(s),\mathcal {P}\mathcal {C}^{p\sigma ,s})\) since it is linear in this variable. With respect to \(\varphi \in \mathcal {R}\mathcal {C}\mathcal {R}_c(s)\), we have that \(\varphi \mapsto \mathcal {J}^{\eta \mu }\mathcal {K}^{\mu }_s\circ \tilde R_{\delta ,s}^{(1)}(\tilde u_{\tilde \eta ,s}(\varphi ))\) is C 1: this follows by Lemma I.5.6.6 with μ > (p + 1)σ and the C 1 smoothness of \(\varphi \mapsto \mathcal {J}^{\sigma \tilde \eta }\circ \tilde u_{\tilde \eta ,s}(\varphi )\) with \(\sigma >\tilde \eta \). For the C 1 smoothness of the portion \(\varphi \mapsto \mathcal {J}^{\eta \mu }H^{(p)}_{\mu /p}(\varphi )\), we get differentiability from Lemma I.5.6.6; we have that the derivative of \(\varphi \mapsto H^{(p)}_{\mu /p}(\varphi )\) is a sum of terms of the form

and each \(\tilde u_{\tilde \eta ,s}^{(j)}\) is understood as a map into \(\mathcal {P}\mathcal {C}^{j\sigma ,s}\). Applying Lemma I.5.6.3 with μ > (p + 1)σ grants continuity of \(DH^{(p)}_{\mu /p}(\varphi )\) and, subsequently, of \(\mathcal {J}^{\eta \mu }DH_{\mu /p}^{(p)}(\varphi )\). The other embedding properties of condition b1 are easily checked. Condition b4 can be proven similarly.

The Lipschitz condition and boundedness of b2 follow by the choice of δ > 0 at the beginning and the uniform contractivity of H p described above. Condition b3 is proven by writing

$$\displaystyle \begin{aligned}\mathcal{J}^{\eta\mu}\circ\mathcal{K}^\mu_s\circ\tilde R_{\delta,s}^{(1)}(\tilde u_{\tilde\eta,s}(\varphi))=\mathcal{K}^\eta_s\circ\tilde R_{\delta,s}^{(1)}(\tilde u_{\tilde\eta,s}(\varphi))\end{aligned}$$

and applying Lemma I.5.6.3 together with the C 1 smoothness of \(\tilde u_{\tilde \eta ,s}\) to obtain the continuity of \(\varphi \mapsto \tilde R_{\delta ,s}^{(1)}(\tilde u_{\tilde \eta ,s}(\varphi ))\in \mathcal {L}(Y,Y_1)\). This also proves the final condition b5 of Lemma I.5.6.1, and we conclude that \(\tilde u_{\tilde \eta ,s}^{(p)}:\mathcal {R}\mathcal {C}\mathcal {R}_c(s)\rightarrow \mathcal {L}^p(\mathcal {R}\mathcal {C}\mathcal {R}_c(s),\mathcal {P}\mathcal {C}^{\eta ,s})\) is of class C 1 with \(\tilde u_{\tilde \eta ,s}^{(p+1)}=D\tilde u_{\tilde \eta ,s}^{(p)}\in \mathcal {L}^{(p+1)}(\mathcal {R}\mathcal {C}\mathcal {R}_c(s),\) \(\mathcal {P}\mathcal {C}^{\mu ,s})\) given by the unique solution w (p+1) of the equation

$$\displaystyle \begin{aligned}w^{(p+1)}=\mathcal{K}^\mu_s\circ\tilde R_{\delta,s}^{(1)}(\tilde u_{\tilde\eta,s}(\varphi))w^{(p+1)}+H_{\mu/(p+1)}^{(p+1)}(\varphi),\end{aligned} $$
(I.5.28)

where \(H_{\mu /(p+1)}^{(p+1)}(\varphi )=\mathcal {K}^\mu _s\circ \tilde R_{\delta ,s}^{(2)}(\tilde u_{\tilde \eta ,s}(\varphi ))(\tilde u_{\tilde \eta ,s}^{(p)}(\varphi ),\tilde u_{\tilde \eta ,s}^{(1)}(\varphi ))+DH_{\mu /p}^{(p)}(\varphi )\). Similar arguments to the proof of the case k = 1 show that the fixed point w (p+1) is also contained in \(\mathcal {L}^{(p+1)}(\mathcal {R}\mathcal {C}\mathcal {R}_c(s),\mathcal {P}\mathcal {C}^{\tilde \eta (p+1),s})\), and the proof is complete. □

Corollary I.5.6.1

\(\mathcal {C}:\mathcal {R}\mathcal {C}\mathcal {R}_c\rightarrow \mathcal {R}\mathcal {C}\mathcal {R}\) is C m and tangent at the origin to the centre bundle \(\mathcal {R}\mathcal {C}\mathcal {R}_c\) . More precisely, \(\mathcal {C}(t,\cdot ):\mathcal {R}\mathcal {C}\mathcal {R}_c(t)\rightarrow \mathcal {R}\mathcal {C}\mathcal {R}\) is C m and \(D\mathcal {C}(t,0)\phi =\phi \) for all \(\phi \in \mathcal {R}\mathcal {C}\mathcal {R}_c(t)\).

Proof

Let \(\tilde \eta ,\eta \) be as in the proof of Theorem I.5.6.1. Define the evaluation map \(ev_t:\mathcal {P}\mathcal {C}^{\eta }\rightarrow \mathcal {R}\mathcal {C}\mathcal {R}\) by ev t(f) = f(t). Since we can decompose the centre manifold as

$$\displaystyle \begin{aligned}\mathcal{C}(t,\phi)=ev_t(\tilde u_{t}(\phi))=ev_t(\mathcal{J}_t^{\eta\tilde\eta}\tilde u_t(\phi)),\end{aligned}$$

boundedness of the linear evaluation map on the space \(\mathcal {P}\mathcal {C}^{\eta ,t}\) then implies the C m smoothness of \(\mathcal {C}(t,\cdot )\). To obtain the tangent property, we remark that

$$\displaystyle \begin{aligned}D\mathcal{C}(t,0)\phi = ev_t\left(D\left(\mathcal{J}^{\eta\tilde\eta}_t\circ \tilde u_t(0)\right)\phi\right)=ev_t\left(\tilde u_{\eta,t}^{(1)}(0)\phi\right).\end{aligned}$$

From equation (I.5.26) and Theorem I.5.3.1, we obtain \(\tilde u_{\eta ,t}^{(1)}(0)=U(\cdot ,t)\), from which it follows that \(D\mathcal {C}(t,0)\phi = \phi \), as claimed. □

As a secondary corollary, we can prove that each derivative of the centre manifold is uniformly Lipschitz continuous. The proof is similar to that of Corollary I.5.2.1 if one takes into account the representation of the derivatives \(\tilde u_{\tilde \eta ,s}^{(p)}\) as solutions of the fixed-point equations (I.5.28), whose right-hand side is a contraction with Lipschitz constant independent of s.

Corollary I.5.6.2

For each p ∈{1, …, m}, there exists a constant L(p) > 0 such that the centre manifold satisfies ||D p C(t, ϕ) − D p C(t, ψ)||≤ L(p)||ϕ − ψ|| for all \(t\in \mathbb {R}\) and \(\phi ,\psi \in \mathcal {R}\mathcal {C}\mathcal {R}_c(t)\).

Additionally, each of the Taylor coefficients of the centre manifold is in fact bounded. This observation will be important in later chapters.

Corollary I.5.6.3

There exist constants γ 0, …, γ m such that the C m centre manifold satisfies \(||D^j\mathcal {C}(t,0)||\leq \gamma _j\) for all \(t\in \mathbb {R}\) . If the centre manifold is C m+1 , the Taylor remainder

$$\displaystyle \begin{aligned}R_m(t,\phi)=\mathcal{C}(t,\phi)-\sum_{j=1}^m D^j\mathcal{C}(t,0)\phi^j\end{aligned}$$

admits an estimate of the form ||R m(t, ϕ)||≤ γ (m)||ϕ||m+1 for \(\phi \in B_\delta (0)\cap \mathcal {R}\mathcal {C}\mathcal {R}_c(t)\) . The constants γ (m) and δ can be chosen independent of t.

Proof

From equation (I.5.28), the jth Taylor coefficient is given by

$$\displaystyle \begin{aligned}D^j\mathcal{C}(t,0)=ev_t\left(H_{\mu/(j+1)}^{(j+1)}(0)\right).\end{aligned}$$

The first two coefficients (j = 0 and j = 1) are zero and the identity, respectively. These are bounded. A straightforward inductive argument on the form of the maps H then grants the uniform boundedness of \(D^j\mathcal {C}(t,0)\). The claimed bound on the remainder term then follows from the uniform boundedness of \(D^{m+1}\mathcal {C}(t,0)\) and the Lipschitz constant L(j + 1) from Corollary I.5.6.2. □

We readily obtain the smoothness of the centre manifold in the case where the semilinear equation is periodic. In particular, in such a situation some of the assumptions H.1–H.8 are satisfied automatically and can be ignored.

Corollary I.5.6.4

Suppose the semilinear equation (I.4.1)–(I.4.2) satisfies the following conditions.

  1. P.1

    The equation is periodic with period T and c impulses per period. That is, L(t + T) = L(t) and f(t + T, ⋅) = f(t, ⋅) for all \(t\in \mathbb {R}\) , and B(k + c) = B(k) , g(k + c, ⋅) = g(k, ⋅) and t k+c = t k + T for all \(k\in \mathbb {Z}\).

  2. P.2

    Conditions H.1–H.3 and H.5–H.6 are satisfied.

Then, the conclusions of Corollaries I.5.6.1 and I.5.6.2 hold.

6.5 Periodic Centre Manifold

In this section we will prove that the centre manifold is itself a periodic function, provided the conditions P.1–P.2 of Corollary I.5.6.4 are satisfied. We begin with a preparatory lemma.

Lemma I.5.6.7

Define the operator \(N_s:\mathcal {R}\mathcal {C}\mathcal {R}_c(s)\rightarrow \mathcal {R}\mathcal {C}\mathcal {R}_c(s)\) by

$$\displaystyle \begin{aligned} N_s(\phi)=P_c(s)S(s+T,s)\mathcal{C}(s,\phi).\end{aligned}$$

This operator is well-defined and invertible in a neighbourhood of \(0{\in }\mathcal {R}\mathcal {C}\mathcal {R}_c(s)\) . Moreover, the neighbourhood can be written \(U\cap \mathcal {R}\mathcal {C}\mathcal {R}_c(s)\) for some open neighbourhood \(U\subset \mathcal {R}\mathcal {C}\mathcal {R}\) of \(0\in \mathcal {R}\mathcal {C}\mathcal {R}\) , independent of s.

Proof

To show that N s is invertible in a neighbourhood of the origin, we will use the inverse function theorem. The Fréchet derivative of N s at 0 is given by

$$\displaystyle \begin{aligned} D N_s(0)\phi&=P_c(s)\circ DS(s+T,s)(0)\circ D\mathcal{C}(s,0)\phi\\ &=P_c(s+T)\circ U(s+T,s)\phi\\ &=U_c(s+T,s)\phi, \end{aligned} $$

where we used Corollary I.5.6.1 to calculate \(D\mathcal {C}(s,0)\) and Theorem I.4.2.1 to calculate DS(s + T, s)(0). Since U(s + T, s) is an isomorphism (Theorem I.3.1.3) of \(\mathcal {R}\mathcal {C}\mathcal {R}_c(s)\) with \(\mathcal {R}\mathcal {C}\mathcal {R}_c(s+T)=\mathcal {R}\mathcal {C}\mathcal {R}_c(s)\), we obtain the claimed local invertibility.

To show that the neighbourhood may be written as claimed, we remark that DN s(x) is uniformly convergent (in the variable s) as x → 0 to DN s(0). Indeed, we have the estimate

$$\displaystyle \begin{aligned}||D N_s(x)-D N_s(0)||\leq||U_c(s+T,s)P_c(s)||\cdot||D\mathcal{C}(s,x)-D\mathcal{C}(s,0)||,\end{aligned}$$

and the Lipschitz property of Corollary I.5.6.2 together with uniform boundedness of the projector P c(s) and centre monodromy operator U c(s + T, s) grants the uniform convergence as x → 0. As a consequence, the local inverse may be defined on a neighbourhood that does not depend on s. □

Theorem I.5.6.2

There exists δ > 0 such that \(\mathcal {C}(s+T,\phi )=\mathcal {C}(s,\phi )\) for all \(s\in \mathbb {R}\) whenever ||ϕ||≤ δ.

Proof

By Lemma I.5.6.7, there exists δ > 0 such that if ||ϕ||≤ δ, we can write ϕ = N s(ψ) for some \(\psi \in \mathcal {R}\mathcal {C}\mathcal {R}_c(s)\). By Theorem I.5.3.1 and the periodicity condition P.1,

$$\displaystyle \begin{aligned} \mathcal{C}(s+T,\phi)&=\mathcal{C}(s+T, N_s(\psi))\\ &=\mathcal{C}(s+T,P_c(s+T)S(s+T,s)\mathcal{C}(s,\psi))\\ &=S(s+T,s)\mathcal{C}(s,\psi)\\ &=S(s,s-T)\mathcal{C}(s,\psi)\\ &=\mathcal{C}(s,P_c(s)S(s,s-T)\mathcal{C}(s,\psi))\\ &=\mathcal{C}(s,P_c(s)S(s+T,s)\mathcal{C}(s,\psi))\\ &=\mathcal{C}(s, N_s(\psi))=\mathcal{C}(s,\phi), \end{aligned} $$

where the identity S(s + T, s) = S(s, s − T) follows due to periodicity and Lemma I.4.1.1. □

7 Regularity of Centre Manifolds with Respect to Time

In the previous section we were concerned with the smoothness of \(\phi \mapsto \mathcal {C}(t,\phi )\). By contrast, in this section we are interested in the degree to which the function \(t\mapsto D^k\mathcal {C}(t,\phi )\) is differentiable, for each k = 1, …, m. We should generally not expect this function to be differentiable; indeed, it would be very surprising if this were true, given that the evolution family U(t, s) associated to the linearization is generally discontinuous everywhere (recall the discussion of Sect. I.2.2.2).

Perhaps it is better to motivate our ideas on regularity in time by explaining how we will be using the centre manifold in applications. From Taylor’s theorem, \(\mathcal {C}(t,\phi )\) admits an expansion of the form

$$\displaystyle \begin{aligned} \mathcal{C}(t,\phi)=D\mathcal{C}(t,0)\phi + \frac{1}{2}D^2\mathcal{C}(t,0)[\phi]^2 + \cdots + \frac{1}{m!}D^m\mathcal{C}(t,0)[\phi]^m + O(||\phi||{}^{m+1}), \end{aligned} $$

where [ϕ]k = [ϕ, …, ϕ] with k factors, and the O(||ϕ||m+1) terms generally depend on t. By Theorem I.5.6.2, under periodicity assumptions these terms will be uniformly bounded in t for ||ϕ|| sufficiently small. This expansion can in principle be used in the dynamics equation (I.5.12)–(I.5.13) on the centre manifold or its integral version (I.5.10), which will permit us to classify bifurcations in impulsive RFDE. In later sections we will want to make these dynamics equations concrete—that is, to pose them in a concrete vector space such as \(\mathbb {R}^p\) for some \(p\in \mathbb {N}\). By analogy with ordinary and delay differential equations, this should also allow us to obtain a partial differential equation for the Taylor coefficients \(D^j\mathcal {C}(t,0)\). As these coefficients are time-varying, we should suspect this PDE to contain derivatives in time as well.

In summary, we need to consider the differentiability of the function \(t\mapsto D^j\mathcal {C}(t,0)\) for j = 1, …, m. Since we suspect that this function will not actually be differentiable, we might consider instead the differentiability of

$$\displaystyle \begin{aligned}t\mapsto D^j\mathcal{C}(t,0)[\phi_1,\dots,\phi_j](\theta)\end{aligned}$$

for each θ ∈ [−r, 0] and j-tuples ϕ 1, …, ϕ j. While a more realistic goal, even this is too strong a condition. The first differential \(D\mathcal {C}(t,0):\mathcal {R}\mathcal {C}\mathcal {R}_c(t)\rightarrow \mathcal {R}\mathcal {C}\mathcal {R}\) of the centre manifold has a different domain for each t. As a consequence, we cannot even define the derivative of \(t\mapsto D\mathcal {C}(t,0)\phi (\theta )\), since we must have \(\phi \in \mathcal {R}\mathcal {C}\mathcal {R}_c(t)\) for the right-hand side to be well-defined. This problem is apparent for all higher differentials.

7.1 A Coordinate System and Pointwise PC 1, m-Regularity

To address the issue of the centre manifold having a “time-varying domain”, let us first assume that \(\mathcal {R}\mathcal {C}\mathcal {R}_c\) is finite-dimensional—that is, H.9 is satisfied. Note that if we fix a sufficiently well-behaved coordinate system in \(\mathcal {R}\mathcal {C}\mathcal {R}_c(t)\)—for example, let ϕ 1, …, ϕ p be a basis for \(\mathcal {R}\mathcal {C}\mathcal {R}_c(0)\) and define ϕ i(t) = U c(t, 0)ϕ i for i = 1, …, p to be a basis for \(\mathcal {R}\mathcal {C}\mathcal {R}_c(t)\)—then the function w(t) of Lemma I.5.4.1 and Theorem I.5.4.1 can be written as w(t) = Φ t z(t) for some \(z(t)\in \mathbb {R}^p\), where \(\varPhi _t=[\begin {array}{ccc}\phi _1(t)&\cdots &\phi _p(t)\end {array}]\). This motivates us to consider instead a centre manifold in these coordinates.

Definition I.5.7.1

The function \(C:\mathbb {R}\times \mathbb {R}^p\rightarrow \mathcal {R}\mathcal {C}\mathcal {R}\) defined by

$$\displaystyle \begin{aligned} C(t,z)=\mathcal{C}(t,\varPhi_tz) \end{aligned} $$
(I.5.29)

is the centre manifold in terms of the basis array Φ.

If \(\mathcal {C}(t,\cdot )\) is C m-smooth, the chain rule implies the same is true for C(t, ⋅). It follows that

$$\displaystyle \begin{aligned} \mathcal{C}(t,w(t))&=DC(t,0)z(t) + \frac{1}{2}D^2C(t,0)[z(t),z(t)] + \cdots\\ &\quad + \frac{1}{m!}D^mC(t,0)[z(t)]^m + O(||w(t)||{}^{m+1}), \end{aligned} $$

so insofar as dynamics on the centre manifold are concerned, it is enough to study the differentiability of t↦D j C(t, 0)[z 1, …, z j](θ) for j-tuples \(z_1,\dots ,z_j\in \mathbb {R}^p\). Specifically, the temporal regularity we will attempt to prove is given in the following definition.

Definition I.5.7.2

A function \(F:\mathbb {R}\times \mathbb {R}^p\rightarrow \mathcal {R}\mathcal {C}\mathcal {R}\) is pointwise PC 1, m -regular at zero if it satisfies the following conditions:

  • x↦F(t, x) is C m in a neighbourhood of \(0\in \mathbb {R}^p\), uniformly in t;

  • for j = 0, …, m, D j F(t, 0)[z 1, …, z j](θ) is differentiable from the right with limits on the left separately with respect to t and θ, for all z 1, …, z j \(\in \mathbb {R}^p\).
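
As a toy illustration (not drawn from the equations studied here): if \(a\in \mathcal {R}\mathcal {C}\mathcal {R}(\mathbb {R},\mathbb {R}^n)\) is differentiable from the right with limits on the left and \(b\in \mathbb {R}^p\), then the function

$$\displaystyle \begin{aligned}F(t,z)(\theta)=a(t+\theta)\langle b,z\rangle\end{aligned}$$

is pointwise PC 1, 1-regular at zero: z↦F(t, z) is linear (hence smooth, uniformly in t), F(t, 0) = 0, and DF(t, 0)[z](θ) = a(t + θ)⟨b, z⟩ is differentiable from the right with limits on the left separately with respect to t and θ.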

With this in mind, the result we will prove is as follows.

Theorem I.5.7.1

Let ϕ 1, …, ϕ p be a basis for \(\mathcal {R}\mathcal {C}\mathcal {R}_c(0)\) , and define

$$\displaystyle \begin{aligned}\varPhi_t=[\begin{array}{ccc}U_c(t,0)\phi_1&\cdots&U_c(t,0)\phi_p\end{array}].\end{aligned}$$

If the centre manifold \(\mathcal {C}:\mathcal {R}\mathcal {C}\mathcal {R}_c\rightarrow \mathcal {R}\mathcal {C}\mathcal {R}\) is (fibrewise) C m , then the centre manifold in terms of the basis array Φ is pointwise PC 1, m -regular at zero provided certain technical conditions are met (assumption H.10). Moreover, if \(\theta \in \mathcal {R}\mathcal {C}\mathcal {R}(\mathbb {R},[-r,0])\) , then tC(t, z)(θ(t)) is continuous from the right with limits on the left for all \(z\in \mathbb {R}^p\) , and zC(t, z) is Lipschitz continuous, uniformly for \(t\in \mathbb {R}\).

The technical condition will be introduced in Sect. I.5.7.3.

7.2 Reformulation of the Fixed-Point Equation

Given that \(C(t,z)=\mathcal {C}(t,\varPhi _tz)\), we can equivalently write C(t, z) = v t(z)(t) with \(v_t:\mathbb {R}^p\rightarrow \mathcal {P}\mathcal {C}^{\eta ,t}\) the unique fixed point of the equation

$$\displaystyle \begin{aligned}v_t(z) = \varPhi_{(\cdot)}z + \mathcal{K}_t^\eta(R_{\delta,t}(v_t(z)))\end{aligned} $$
(I.5.30)

for each |z| small enough, where \(\mathcal {K}_t^\eta \) is as defined in Eq. (I.5.3) and R δ,t is the substitution operator from Sect. I.5.1.3. Notice also that the nonlinear operator defining the right-hand side of the equation admits the same Lipschitz constant as the original fixed-point operator \(\mathcal {G}\) from Eq. (I.5.7). Up to an appropriate embedding, the jth differential \(v^{(j)}_t\) satisfies for j ≥ 2 a fixed-point equation of the form

$$\displaystyle \begin{aligned}v^{(j)}_t=\mathcal{K}^\eta_t\circ R_{\delta,t}^{(1)}(v_t)v_t^{(j)} + \mathcal{K}^\eta_t\circ H^{(j)}(v_t,v_t^{(1)},\dots,v_t^{(j-1)}), \end{aligned} $$
(I.5.31)

with the right-hand side defining a uniform contraction in \(v_t^{(j)}\). H (j) can be written as a finite linear combination of terms of the form

$$\displaystyle \begin{aligned}R_{\delta,t}^{(q)}(v_t)[v_t^{(r_1)},\dots,v_t^{(r_q)}],\end{aligned}$$

for q ∈{2, …, j}, such that r 1 + ⋯ + r q = j. All of this follows from (the proof of) Theorem I.5.6.1. Explicitly,

$$\displaystyle \begin{aligned}H^{(j)} = -R^{(1)}_{\delta,t}(v_t)v_t^{(j)} + D^j_z[R_{\delta,t}(v_t(z))],\end{aligned}$$

and one can verify by induction on j that H (j) contains no term of the form \(R^{(1)}(v_t)v_t^{(j)}\) and that the coefficients in the aforementioned linear combination are independent of t. To compare, for j = 0 and j = 1, we can compute directly from the definition of the fixed point and by using Corollary I.5.6.1 and the chain rule that

$$\displaystyle \begin{aligned}v_t(0)(\cdot)&= 0, \end{aligned} $$
(I.5.32)
$$\displaystyle \begin{aligned}v_t^{(1)}(0)(\cdot) & = \varPhi_{(\cdot)}. \end{aligned} $$
(I.5.33)

The assumption Df(t, 0) = Dg(k, 0) = 0 implies \(R_\delta ^{(1)}(0)=0\), so the fixed-point equation (I.5.31) implies

$$\displaystyle \begin{aligned} v_t^{(j)}(0)(\mu) = \left[\mathcal{K}^\eta_t\circ H^{(j)}(0,\varPhi_{(\cdot)},v_t^{(2)}(0)(\cdot),\dots,v_t^{(j-1)}(0)(\cdot))\right](\mu) \end{aligned} $$
(I.5.34)

for j ≥ 2. By definition of the basis array Φ, the following lemma is proven.

Lemma I.5.7.1

If the centre manifold is C 1 , then the centre manifold in terms of the basis array Φ is pointwise PC 1, 1 -regular at zero. If the centre manifold is C m , then the centre manifold in terms of the basis array Φ is pointwise PC 1, m -regular at zero provided

$$\displaystyle \begin{aligned} t&\mapsto \left[\mathcal{K}^\eta_t\circ H^{(j)}(0,\varPhi_{(\cdot)},v_t^{(2)}(0)(\cdot),\dots,v_t^{(j-1)}(0)(t))\right](t)[z_1,\dots,z_j](\theta),\\ \theta&\mapsto \left[\mathcal{K}^\eta_t\circ H^{(j)}(0,\varPhi_{(\cdot)},v_t^{(2)}(0)(\cdot),\dots,v_t^{(j-1)}(0)(t))\right](t)[z_1,\dots,z_j](\theta) \end{aligned} $$

are each, for j = 2, …, m differentiable from the right with limits on the left, for all \(z_1,\dots ,z_j\in \mathbb {R}^p\).

7.3 A Technical Assumption on the Projections P c(t) and P u(t)

By definition of the bounded linear map \(\mathcal {K}^\eta _t\) from (I.5.3), it will be necessary to differentiate (in t) integrals involving terms of the form μ↦U(t, μ)P s(μ)χ 0 and μ↦U(t, μ)P u(μ)χ 0. Generally, if we assume \(\mathcal {R}\mathcal {C}\mathcal {R}_u(0)\) to be q-dimensional (guaranteed by Theorem I.3.1.3 if the linearization is periodic, for example), then we can fix a basis ψ 1, …, ψ q for \(\mathcal {R}\mathcal {C}\mathcal {R}_u(0)\) and construct a basis array

$$\displaystyle \begin{aligned}\varPsi_t=[\begin{array}{ccc}U_u(t,0)\psi_1&\cdots&U_u(t,0)\psi_q\end{array}]\end{aligned}$$

for \(\mathcal {R}\mathcal {C}\mathcal {R}_u(t)\) that is formally analogous to the basis array Φ t for the centre fibre bundle. Under spectral separation assumptions, \(U_u(t,s):\mathcal {R}\mathcal {C}\mathcal {R}_u(s)\rightarrow \mathcal {R}\mathcal {C}\mathcal {R}_u(t)\) and \(U_c(t,s):\mathcal {R}\mathcal {C}\mathcal {R}_c(s)\rightarrow \mathcal {R}\mathcal {C}\mathcal {R}_c(t)\) are topological isomorphisms, from which it follows that there exist unique \(Y_c(t)\in \mathbb {R}^{p\times n}\) and \(Y_u(t)\in \mathbb {R}^{q\times n}\) such that

$$\displaystyle \begin{aligned} P_c(t)\chi_0 &= \varPhi_tY_c(t),\\ P_u(t)\chi_0&=\varPsi_tY_u(t). \end{aligned} $$
(I.5.35)

Recall \(p=\dim (\mathcal {R}\mathcal {C}\mathcal {R}_c)\). Even under periodicity conditions, computing the action of these projections on the functional \(\chi _0\in \mathcal {R}\mathcal {C}\mathcal {R}([-r,0],\mathbb {R}^{n\times n})\) is quite nontrivial and requires computing the abstract contour integral (I.3.4). Though this can in principle be done numerically by discretizing the monodromy operator, there is little in the way of theoretical results guaranteeing, for example, that the matrix functions tY c(t) and tY u(t) are, respectively, elements of \(\mathcal {R}\mathcal {C}\mathcal {R}(\mathbb {R},\mathbb {R}^{p\times n})\) and \(\mathcal {R}\mathcal {C}\mathcal {R}(\mathbb {R},\mathbb {R}^{q\times n})\). Such a result would make the differentiation of the integrals appearing in the definition of \(\mathcal {K}^\eta _t\) much more reasonable. We therefore introduce another hypothesis. We will discuss it in a bit more detail in Sect. I.5.7.7.

  1. H.10

    There are (finite) basis arrays Φ and Ψ for \(\mathcal {R}\mathcal {C}\mathcal {R}_c\) and \(\mathcal {R}\mathcal {C}\mathcal {R}_u\), respectively, for which the matrix functions tY c(t) and tY u(t) from equation (I.5.35) are continuous from the right and possess limits on the left.
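
For readers who wish to experiment numerically, the following schematic sketch (in numpy) indicates one way the coefficient matrices of (I.5.35) could be approximated at a single time from a discretized monodromy operator. It is only an illustration: the matrix U_mono (a discretization of the monodromy operator on a mesh of [−r, 0]), the matrix chi0 (a discretization of the columns of χ 0), the classification of the Floquet multipliers by modulus and the assumption of diagonalizability are all choices made here for the sake of the sketch and are not constructions from the text.

    import numpy as np

    def centre_unstable_coefficients(U_mono, chi0, tol=1e-8):
        # Schematic only: U_mono is assumed to be a diagonalizable matrix
        # discretization of the monodromy operator; chi0 discretizes the
        # columns of chi_0; multipliers are classified by modulus.
        lam, V = np.linalg.eig(U_mono)            # Floquet multipliers, right eigenvectors
        W = np.linalg.inv(V)                      # rows: dual (left) eigenvectors
        centre = np.abs(np.abs(lam) - 1.0) < tol
        unstable = np.abs(lam) > 1.0 + tol
        Pc = V[:, centre] @ W[centre, :]          # spectral projection, centre part
        Pu = V[:, unstable] @ W[unstable, :]      # spectral projection, unstable part
        Phi = V[:, centre]                        # columns: (complex) centre basis array
        Psi = V[:, unstable]                      # columns: (complex) unstable basis array
        # Solve Phi @ Yc = Pc @ chi0 and Psi @ Yu = Pu @ chi0 in the
        # least-squares sense, mirroring (I.5.35).
        Yc, *_ = np.linalg.lstsq(Phi, Pc @ chi0, rcond=None)
        Yu, *_ = np.linalg.lstsq(Psi, Pu @ chi0, rcond=None)
        return Yc, Yu

Checking hypothesis H.10 would then correspond to verifying that the analogous matrices, computed fibrewise along t, vary in a manner that is continuous from the right with limits on the left.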

7.4 Proof of PC 1, m-Regularity at Zero

We deal first with the continuity of tC(t, z)(θ(t)) from the right and the existence of its left-limits. Since \(C(\cdot ,z)=v_t(z)(\cdot )\in \mathcal {P}\mathcal {C}^{\eta ,t}\), it can be identified with a history function tc t for some \(c\in \mathcal {R}\mathcal {C}\mathcal {R}(\mathbb {R},\mathbb {R}^n)\). But this implies C(t, z)(θ(t)) = c t(θ(t)) = c(t + θ(t)). The conclusion follows because \(c\in \mathcal {R}\mathcal {C}\mathcal {R}(\mathbb {R},\mathbb {R}^n)\) and \(\theta \in \mathcal {R}\mathcal {C}\mathcal {R}(\mathbb {R},[-r,0])\), and right-continuity and limits respect composition. As for the Lipschitzian claim, it follows by similar arguments to the proof of the original centre manifold Theorem I.5.2.1 and Corollary I.5.2.1.

Using the definition of the linear map \(\mathcal {K}^\eta _t\) in (I.5.3) and equation (I.5.34), we can explicitly write \(v_t^{(j)}(0)(t)\) as

$$\displaystyle \begin{aligned} v_t^{(j)}(0)(t)&=\int_{-\infty}^t U(t,\mu)[I-P_c(\mu)-P_u(\mu)]\chi_0 \hat H^{(j)}_1(\mu)d\mu \\ &\quad - \int_t^\infty U(t,\mu)P_u(\mu)\chi_0 \hat H^{(j)}_1(\mu)d\mu\\ &\quad +\sum_{t_i\leq t} U(t,t_i)[I-P_c(t_i)-P_u(t_i)]\chi_0 \hat H^{(j)}_2(t_i) \\ &\quad - \sum_{t_i>t} U(t,t_i)P_u(t_i)\chi_0 \hat H^{(j)}_2(t_i), \end{aligned} $$

where \(\hat H_1^{(j)}(\mu )\), \(\hat H_2^{(j)}(t_i)\) and H (j) are related by the equations

$$\displaystyle \begin{aligned} H^{(j)}&=\sum_i c_i R_{\delta,t}^{(r_i)}(0)[\varPhi_{(\cdot)}^{d_{i,1}},[v_t^{(2)}(0)(t)]^{d_{i,2}},\dots,[v_t^{(j-1)}(0)(t)]^{d_{i,j-1}}]\\ \hat H_1^{(j)}(\mu)&=\sum_i c_{i}D^{r_i}f(\mu,0)[\varPhi_{\mu}^{d_{i,1}},[v_t^{(2)}(0)(\mu)]^{d_{i,2}},\dots,[v_t^{(j-1)}(0)(\mu)]^{d_{i,j-1}}]\\ \hat H_2^{(j)}(t_k)&=\sum_i c_{i}D^{r_i}g(k,0)[\varPhi_{t_k}^{d_{i,1}},[v_t^{(2)}(0)(t_k)]^{d_{i,2}},\dots,[v_t^{(j-1)}(0)(t_k)]^{d_{i,j-1}}]. \end{aligned} $$

The first line follows from the definition of H (j), while the other two come from the definition of the substitution operator. Note also that we have suppressed the inputs z 1, …, z j; technically, each of \(\hat H_1^{(j)}(\mu )\) and \(\hat H_2^{(j)}(t_i)\) is a j-linear map from \(\mathbb {R}^p\) to \(\mathcal {R}\mathcal {C}\mathcal {R}\). Using assumption H.10, we can equivalently write \(v_t^{(j)}(0)(t)\) as

$$\displaystyle \begin{aligned} v_t^{(j)}(0)(t)&=\int_{-\infty}^t [U(t,\mu)\chi_0 - \varPhi_tY_c(\mu) - \varPsi_tY_u(\mu)] \hat H^{(j)}_1(\mu)d\mu \\ &- \int_t^\infty \varPsi_tY_u(\mu) \hat H^{(j)}_1(\mu)d\mu \\ &+\sum_{t_i\leq t} [U(t,t_i)\chi_0 - \varPhi_tY_c(t_i) - \varPsi_tY_u(t_i)] \hat H^{(j)}_2(t_i) \\ &- \sum_{t_i>t} \varPsi_tY_u(t_i) \hat H^{(j)}_2(t_i). {} \end{aligned} $$
(I.5.36)

At this stage, we remark that Theorem I.5.6.1 implies \(v_t^{(i)}(0)(\cdot )[z_1,\dots ,z_i]\in \mathcal {P}\mathcal {C}^\infty \) for i = 1, …, j − 1, while Φ t is pointwise differentiable from the right by its very definition. With these details and assumption H.3, \(\mu \mapsto \hat H_1^{(j)}(\mu )[z_1,\dots ,z_j]\) is an element of \(\mathcal {R}\mathcal {C}\mathcal {R}(\mathbb {R},\mathbb {R}^n)\) for every tuple \(z_1,\dots ,z_j\in \mathbb {R}^p\). From assumption H.10, \(v_t^{(j)}(0)(t)\) is pointwise differentiable from the right if and only if the limit

$$\displaystyle \begin{aligned} \lim_{\epsilon\rightarrow 0^+}\frac{1}{\epsilon}\int_t^{t+\epsilon}U(t+\epsilon,\mu)\chi_0 \hat H_1^{(j)}(\mu)d\mu \end{aligned} $$

exists pointwise. From Eq. (I.2.15) and Lemma I.2.3.5, we can equivalently write the integral above in terms of the fundamental matrix solution:

$$\displaystyle \begin{aligned} &\int_t^{t+\epsilon}U(t+\epsilon,\mu)\chi_0 \hat H_1^{(j)}(\mu)d\mu\\ &\quad =\int_t^{t+\epsilon}\chi_{(-\infty,t+\epsilon+\theta]}(\mu)\left(I+\int_\mu^{t+\epsilon+\theta}L(\zeta)V_\zeta(\cdot,\mu)d\zeta\right)\hat H^{(j)}_1(\mu)d\mu. \end{aligned} $$

If θ < 0, then the integrand vanishes when 𝜖 < −θ. Since \(\mu \mapsto \hat H_1^{(j)}(\mu )\) is continuous from the right, we conclude that

$$\displaystyle \begin{aligned} \lim_{\epsilon\rightarrow 0^+}\frac{1}{\epsilon}\int_t^{t+\epsilon}U(t+\epsilon,\mu)\chi_0 \hat H_1^{(j)}(\mu)d\mu&=\chi_0 \hat H_1^{(j)}(t), \end{aligned} $$

so that \(t\mapsto v_t^{(j)}(0)(t)\) is differentiable from the right (for θ fixed), as claimed. The proof of existence of limits on the left is similar and omitted.

To get the analogous result for θ, it is worth recalling that from the fixed-point formulation, \(v_t^{(j)}(0)\) is a j-linear map from \(\mathbb {R}^p\) to \(\mathcal {P}\mathcal {C}^{\eta ,t}\). As a consequence, for all \(t\in \mathbb {R}\), θ ∈ [−r, 0] and \(z_1,\dots ,z_j\in \mathbb {R}^p\) the equation

$$\displaystyle \begin{aligned}v_t^{(j)}(0)(t)[z_1,\dots,z_j](\theta)=v_t^{(j)}(0)(t+\theta)[z_1,\dots,z_j](0)\end{aligned}$$

is satisfied. The analogous differentiability and limit results for θ therefore follow from those of t, completing the proof.

7.5 The Hyperbolic Part Is Pointwise PC 1, m-Regular at Zero

Later we will need to also consider the Taylor expansions of the hyperbolic part \(H:\mathbb {R}\times \mathbb {R}^p\rightarrow \mathcal {R}\mathcal {C}\mathcal {R}\) of the centre manifold in terms of a basis array Φ, defined by

$$\displaystyle \begin{aligned} H(t,z)=(I-P_c(t))C(t,z). \end{aligned} $$
(I.5.37)

The hyperbolic part is guaranteed to be C m-smooth in z, since (I − P c(t)) is linear. To show that it is pointwise PC 1, m-regular at zero, we notice that H(t, z) = h t(z)(t), where h t(z) can be written as

$$\displaystyle \begin{aligned} h_t(z)&=(I-P_c(t))\varPhi_{(\cdot)}z + \mathcal{K}_*\circ R_{\delta,t}(v_t(z)) \end{aligned} $$

in \(\mathcal {P}\mathcal {C}^0\). However, since (I − P c(t)) is uniformly bounded, \(\mathcal {K}_*=(I-P_c(t))\mathcal {K}^\eta _t\) is well-defined as a map from η-bounded inhomogeneities into \(\mathcal {P}\mathcal {C}^{\eta ,t}\). Setting z = 0, it follows that

$$\displaystyle \begin{aligned} h_t(0)&=0,\\ h_t^{(1)}(0)(t)&=0\\ h_t^{(j)}(0)(t)&=\mathcal{K}_*\circ H^{(j)}(0,\varPhi_{(\cdot)},v_t^{(2)}(0)(\cdot),\dots,v_t^{(j-1)}(0)(\cdot)). \end{aligned} $$

On the other hand, for z ≠ 0 we have

$$\displaystyle \begin{aligned}h_t(z)(t)=\mathcal{K}_*\circ R_{\delta,t}(v_t(z))(t).\end{aligned}$$

By the same argument as in the proof of Theorem I.5.7.1, we can make the following conclusion.

Corollary I.5.7.1

The hyperbolic part H(t, z) = (I  P c(t))C(t, z) of the centre manifold in terms of the basis array Φ is pointwise PC 1, m -regular at zero. Moreover, if \(\theta \in \mathcal {R}\mathcal {C}\mathcal {R}(\mathbb {R},[-r,0])\) , then tH(t, z)(θ(t)) is continuous from the right and has limits on the left for all \(z\in \mathbb {R}^p\) , and zH(t, z) is Lipschitz continuous uniformly for \(t\in \mathbb {R}\).

7.6 Uniqueness of the Taylor Coefficients

Theorem I.5.7.1 guarantees that the coefficients in the Taylor expansion

$$\displaystyle \begin{aligned} C(t,z)&=DC(t,0)z + \frac{1}{2}D^2C(t,0)[z,z]+\cdots+\frac{1}{m!}D^mC(t,0)[z,\dots,z] + O(||z||{}^{m+1}) \end{aligned} $$

are pointwise differentiable from the right and have limits on the left. However, the centre manifold \(\mathcal {C}:\mathcal {R}\mathcal {C}\mathcal {R}_c\rightarrow \mathcal {R}\mathcal {C}\mathcal {R}\) used to define the representation in terms of the basis array Φ depends non-canonically on the choice of cutoff function used to define the substitution operator R δ,t. Nevertheless, this cutoff function does not actually factor into the coefficients D j C(t, 0). Indeed, each of \(\mu \mapsto v^{(j)}_t(0)(\mu )\) is a sum of improper integrals and convergent series that depend only on the lower-order terms \(v^{(i)}_t(0)(\cdot )\) for i < j—see equation (I.5.36)—and is independent of the cutoff function. By induction, we can see from (I.5.32)–(I.5.34) that, in fact, none of these lower-order terms depends on the cutoff function. The same arguments apply to the hyperbolic part. Since this is the only non-canonical element in the definition of the centre manifold (indeed, the renorming is only relevant outside of a small neighbourhood of \(0\in \mathcal {R}\mathcal {C}\mathcal {R}\) and so does not affect Taylor expansions), the following corollary is proven.

Corollary I.5.7.2

Let Φ be a basis array for \(\mathcal {R}\mathcal {C}\mathcal {R}_c\). Let \(\mathcal {C}_1\) and \(\mathcal {C}_2\) be two distinct centre manifolds, and let \(C_1\) and \(C_2\), respectively, be the centre manifolds with respect to the basis array Φ. Also, let \(H_1\) and \(H_2\) be the respective hyperbolic parts. Then, for j = 1, …, m, we have \(D^jC_1(t,0)=D^jC_2(t,0)\) and \(D^jH_1(t,0)=D^jH_2(t,0)\). That is, the Maclaurin series expansion of the centre manifold in terms of the basis array Φ is unique, as is that of the hyperbolic part.

7.7 A Discussion on the Regularity of the Matrices \(t\mapsto Y_j(t)\)

Hypothesis H.10 introduces a technical assumption on the matrices appearing in the decomposition (I.5.35). Our goal in this section is to demonstrate that there is good reason to expect this hypothesis to hold generally, although proving such a result would likely be difficult. We will consider only \(t\mapsto Y_c(t)\), since the discussion for \(t\mapsto Y_u(t)\) is the same.

When the linearization “has no delayed terms” and is spectrally separated as a finite-dimensional system, \(t\mapsto Y_c(t)\) is automatically continuous from the right with limits on the left. Abstractly, having no delayed terms means that the functionals defining the linearization have support in the subspace \(\mathcal {R}\mathcal {C}\mathcal {R}^0=\{\chi _0 \xi : \xi \in \mathbb {R}^n\}\). Let us prove the claim. Let X(t, s) denote the Cauchy matrix associated to the linearization

$$\displaystyle \begin{aligned}\dot x&=L(t)x(t),&t&\neq t_k \end{aligned} $$
(I.5.38)
$$\displaystyle \begin{aligned}\varDelta x&=B(k)x(t^-),&t&=t_k. \end{aligned} $$
(I.5.39)
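
As a toy illustration (a hypothetical scalar example with constant coefficients and integer impulse times, not one taken from elsewhere in the text), the Cauchy matrix of such a system is available in closed form: for

$$\displaystyle \begin{aligned}\dot x=ax\ (t\neq k),\qquad \varDelta x=b\,x(t^-)\ (t=k\in\mathbb{Z}),\end{aligned}$$

each impulse multiplies the state by 1 + b, so that

$$\displaystyle \begin{aligned}X(t,s)=e^{a(t-s)}(1+b)^{\#\{k\in\mathbb{Z}\,:\,s<k\leq t\}},\qquad t\geq s.\end{aligned}$$

Provided 1 + b ≠ 0, each X(t, s) is invertible and \(t\mapsto X(t,0)\) is continuous from the right with limits on the left, which is precisely the regularity used in the argument below.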

The projection \(P_c(t)\) onto the associated centre fibre bundle satisfies the equation

$$\displaystyle \begin{aligned}X(t,s)P_c(s)=P_c(t)X(t,s)\end{aligned}$$

for all t ≥ s. However, since \(X^{-1}(t,s)\) exists for all \(t,s\in \mathbb {R}\) (see Chap. 2 or the monograph [9] for the relevant background on linear impulsive differential equations in finite-dimensional spaces), we have \(P_c(t)=X(t,0)P_c(0)X^{-1}(t,0)\) for all \(t\in \mathbb {R}\). Moreover, \(t\mapsto X(t,0)\) is continuous from the right and has limits on the left, from which it follows that the same is true for \(t\mapsto P_c(t)\). Similarly, each of \(t\mapsto P_s(t)\) and \(t\mapsto P_u(t)\) is an element of \(\mathcal {R}\mathcal {C}\mathcal {R}(\mathbb {R},\mathbb {R}^{n\times n})\). If we write \(P_c(t)=\varPhi (t)Y_c(t)\) for \(\varPhi (t)=X(t,0)\varPhi (0)\) a matrix whose columns form a basis for \(\mathcal {R}\mathcal {C}\mathcal {R}_c(t)\), then the observation that the columns of Φ(t) are linearly independent implies we can write

$$\displaystyle \begin{aligned}Y_c(t)=\varPhi^+(t)P_c(t),\end{aligned}$$

where \(\varPhi ^+(t)\) is the left-inverse of Φ(t). Since the rank of \(t\mapsto \varPhi (t)\) is constant, \(t\mapsto \varPhi ^+(t)\) is continuous from the right and has limits on the left. It follows that \(t\mapsto Y_c(t)\) is an element of \(\mathcal {R}\mathcal {C}\mathcal {R}(\mathbb {R},\mathbb {R}^{p\times n})\). If (I.5.38)–(I.5.39) is now considered as an impulsive RFDE with phase space \(\mathcal {R}\mathcal {C}\mathcal {R}([-r,0],\mathbb {R}^n)\) for some r > 0, then we can write

$$\displaystyle \begin{aligned}U(t,s)\phi(\theta)=\left\{\begin{array}{ll} X(t+\theta,s)\phi(0),&t+\theta\geq s \\ \phi(t+\theta-s),&t+\theta<s. \end{array}\right.\end{aligned}$$

If one defines \(\mathcal {P}_j(t):\mathcal {R}\mathcal {C}\mathcal {R}\rightarrow \mathcal {R}\mathcal {C}\mathcal {R}\) by

$$\displaystyle \begin{aligned}\mathcal{P}_j(t)\phi(\theta)=X(t+\theta,t)P_j(t)\phi(0),\end{aligned}$$

one can verify directly that \(U(t,s):\mathcal {R}\mathcal {C}\mathcal {R}\rightarrow \mathcal {R}\mathcal {C}\mathcal {R}\) is spectrally separated with the triple of projectors \((\mathcal {P}_s,\mathcal {P}_c,\mathcal {P}_u)\); a sketch of the verification of the commutation relation appears after this argument. But then,

$$\displaystyle \begin{aligned}\mathcal{P}_c(t)\chi_0(\theta)&=X(t+\theta,t)P_c(t)=X(t+\theta,t)\varPhi(t)Y_c(t)=\varPhi(t+\theta)Y_c(t)=\varPhi_t(\theta)Y_c(t).\end{aligned} $$

We already know that \(t\mapsto Y_c(t)\) is an element of \(\mathcal {R}\mathcal {C}\mathcal {R}(\mathbb {R},\mathbb {R}^{p\times n})\), and since this same matrix satisfies the decomposition \(\mathcal {P}_c(t)\chi _0=\varPhi _t Y_c(t)\), we are done.
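
As promised, here is a sketch of the verification of the commutation relation \(U(t,s)\mathcal {P}_j(s)=\mathcal {P}_j(t)U(t,s)\) for t ≥ s; the remaining requirements of spectral separation can be checked along similar lines. Assuming, as was shown for \(P_c\) above, that \(X(t,s)P_j(s)=P_j(t)X(t,s)\) for all \(t,s\in \mathbb {R}\), and using the cocycle property \(X(t+\theta ,t)X(t,s)=X(t+\theta ,s)\) together with X(s, s) = I, one computes, for t + θ ≥ s,

$$\displaystyle \begin{aligned} U(t,s)\mathcal{P}_j(s)\phi(\theta)&=X(t+\theta,s)[\mathcal{P}_j(s)\phi](0)=X(t+\theta,s)P_j(s)\phi(0),\\ \mathcal{P}_j(t)U(t,s)\phi(\theta)&=X(t+\theta,t)P_j(t)[U(t,s)\phi](0)=X(t+\theta,t)X(t,s)P_j(s)\phi(0)=X(t+\theta,s)P_j(s)\phi(0), \end{aligned} $$

while for t + θ < s the first line instead reads \(U(t,s)\mathcal {P}_j(s)\phi (\theta )=[\mathcal {P}_j(s)\phi ](t+\theta -s)=X(t+\theta ,s)P_j(s)\phi (0)\), which agrees with the second.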

In the general case, the situation is far more subtle, since the projector \(t\mapsto P_c(t)\) is not even pointwise continuous. Consider the periodic case. \(\mathcal {R}\mathcal {C}\mathcal {R}_c(t)\) is the invariant subspace of the monodromy operator \(V_t\) that contains, in particular, nontrivial elements \(\phi \in \mathcal {R}\mathcal {C}\mathcal {R}_c(t)\) with the property that \(||V_t\phi ||=||\phi ||\). However, since \(V_t=U(t+T,t)\), any such element of \(\mathcal {R}\mathcal {C}\mathcal {R}_c(t)\) will have discontinuities on the set \(D_t=\{\theta \in [-r,0] : t+\theta \in \{t_k:k\in \mathbb {Z}\}\}.\) Generally, \(D_t\) is nonempty and nonconstant; the discontinuities move by translation to the left as t increases. Consequently, the discontinuities of \(P_c(t)\phi \) for fixed \(\phi \in \mathcal {R}\mathcal {C}\mathcal {R}\) are nonconstant in t, so \(t\mapsto ||P_c(t)\phi ||\) is generally discontinuous (from the right and left) at any \(t\in \mathbb {R}\) such that \(D_t\) is nonempty. As such, one cannot take advantage of any regularity properties of \(t\mapsto P_c(t)\) even in the pointwise sense.
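
To see the translation of the discontinuities concretely, consider the hypothetical case of integer impulse times \(t_k=k\) and delay r = 1. Then

$$\displaystyle \begin{aligned}D_t=\{\theta\in[-1,0] : t+\theta\in\mathbb{Z}\}=\left\{\begin{array}{ll}\{\lfloor t\rfloor - t\},& t\notin\mathbb{Z},\\ \{-1,0\},& t\in\mathbb{Z},\end{array}\right.\end{aligned}$$

so as t increases through an interval (k, k + 1), the single element \(\theta =k-t\) of \(D_t\) slides to the left from 0 towards − 1, in line with the discussion above.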

8 Comments

Some of the content of this chapter appears in the two papers Smooth centre manifolds for impulsive delay differential equations and Computation of centre manifolds and some codimension-one bifurcations for impulsive delay differential equations by Church and Liu, published in Journal of Differential Equations [31, 33] in 2018 and 2019, respectively. Some improvements have been made in the present monograph, however. For example, in the first of the two publications, smoothness of the centre manifold was only proven in the periodic case. The second of the two publications considers only discrete delays.

Some early results on the existence of invariant manifolds for impulsive differential equations in the infinite-dimensional context are due to Bainov et al. [11, 12], who prove the existence of integral manifolds (subsets of the phase space consisting of entire solutions) identified as perturbations of linear invariant subspaces. Centre manifolds are not considered, however, and in this context the linear dynamics on a given Banach space are assumed to be reversible, so in particular the restriction to the stable subspace defines an all-time process. Exponential trichotomy is assumed on the dynamics of the linear part, which is similar to what we have assumed in this chapter. Aside from these and related investigations into stable manifolds under weaker notions of hyperbolicity than exponential dichotomy (see [16] and the references cited therein), and some recent results on Lipschitz-smooth stable manifolds for impulsive delay differential equations [8], there has not been much investigation in this area.

Our proof of smoothness of the centre manifold uses formal differentiation in conjunction with Lemma I.5.6.1 on fixed points of contractions on a scale of Banach spaces. The latter technique, as it applies to proving the smoothness of centre manifolds, was introduced in 1987 by Vanderbauwhede and Van Gils [144]. See [44, 70, 71, 107] for a few other applications. Regularity in time of the coefficients in the Taylor expansion of the centre manifold for nonautonomous systems seems not to be as well studied. See Theorem A.1 of [116] by Pötzsche and Rasmussen for a regularity result for invariant manifolds of nonautonomous ordinary differential equations, and the references cited therein for relevant proof methodologies.