1 Introduction

In this paper we study a particular version of the equation

$$\begin{aligned} x'(t) = f\big (x(t - h(x_t))\big ), \end{aligned}$$
(1)

where \(f\) is continuous with negative feedback and \(h: C([-1,0],\mathbb {R}) \rightarrow \mathbb {R}\) is a continuous delay functional.

Our goal is to exhibit pairs \(h\) and \(f\) such that

  1. (i)

    the solutions of Eq. (1) have a non-increasing “oscillation speed”;

  2. (ii)

    Equation (1) has a periodic solution that is both “rapidly oscillating” and stable.

Our motivation, of course, is comparison with the extensively studied constant-delay equation with negative feedback

$$\begin{aligned} x'(t) = f\big (x(t - 1)\big ). \end{aligned}$$
(2)

The oscillation speed for this latter equation—roughly speaking, the number of zeros a solution has per unit time interval—has been described by many authors (often in somewhat more general contexts: see, e.g., [9]) and is a basic tool for understanding the equation’s global dynamics. In particular, there is an invariant set of solutions that are “eventually slowly oscillating”—that is, whose successive zeros are eventually separated by more than one unit of time. When \(f\) is strictly decreasing and smooth, the set of initial conditions of eventually slowly oscillating solutions of Eq. (2) is dense in the phase space \(C([-1,0],\mathbb {R})\). (This result was conjectured in [4] and is proven in [11]; see [14] for an earlier proof under stronger hypotheses.) In particular, when \(f\) is strictly decreasing and smooth no rapidly oscillating periodic solution of Eq. (2) can be stable. The question of whether stable rapidly oscillating periodic solutions of Eq. (2) can exist when \(f\) is not monotonic remains open.

It is natural to ask, then, what alterations of Eq. (2) might admit rapidly oscillating periodic solutions, still worthy of the name, that are stable. For example, instances have been found of constant-delay equations with instantaneous damping (and non-monotonic \(f\))

$$\begin{aligned} x'(t) = \mu x(t) + f\big (x(t-1)\big ) \end{aligned}$$

that have stable rapidly oscillating periodic solutions ([3, 13]). In [6] a family of state-dependent delay equations of the form

$$\begin{aligned} x'(t) = f\Big (x\big (t - \triangle (x(t))\big )\Big ) \end{aligned}$$
(3)

is described for which \(f\) is nonincreasing and for which the instability of particular rapidly oscillating periodic solutions (as measured by the spectral radius of the derivative of an appropriate Poincaré map) can be made arbitrarily weak. (State-dependent equations of the form (3), and various generalizations thereof, are relatively well studied. Results include [8, 10] on existence of slowly oscillating periodic solutions, and [7] on oscillation speed and the structure of the set of slowly oscillating solutions).

In this paper we modify the equations studied in [6] in two particular ways chosen to circumvent the apparent barriers to the solutions considered in [6] being stable. In the first place, we allow the delay to depend on somewhat more of the initial condition than its current value only, so the equation we actually study here is of the form

$$\begin{aligned} x'(t) = f\Big (x\big (t-d(x(t),x(t-r))\big )\Big ), \end{aligned}$$
(4)

where \(r \in (0,1)\) (we shall see that we can take \(r\) to be fairly small). In the second place, we allow the delayed time (the quantity \(t - h(x_t)\) in the notation of Eq. 1) to decrease over certain intervals. (For several different types of equations of the form (1), various authors have either imposed the condition that \(t - h(x_t)\) be increasing, or had this condition emerge as a consequence of other hypotheses; see, for example, [1, 6, 7, 16]. See especially [17] for a discussion of the monotonicity of \(t - h(x_t)\) for a well-motivated subclass of state-dependent delay equations).

We shall exhibit an instance of Eq. (4) that has a non-increasing oscillation speed and a rapidly oscillating periodic solution that is asymptotically stable. We shall see that we can take \(f\) to be non-increasing. The equation we shall study is admittedly contrived; it is best regarded as a smoothed and state-dependent alteration of the well-studied prototype equation

$$\begin{aligned} x'(t) = -\hbox {sign}\big (x(t-1)\big ), \end{aligned}$$
(5)

and the solution we shall study is best regarded as an analog of the periodic solution \(p(t)\) of Eq. (5) whose zeros are separated by \(2/5\). This solution (the dashed line) and a solution \(x(t)\) of Eq. (5) with a nearby initial condition (solid line) are shown below. We assume that \(x\) has precisely two negative zeros \(-u_2 - u_1\) and \(-u_1\) that are close to \(-4/5\) and \(-2/5\), respectively.

[Figure a: the periodic solution \(p(t)\) of Eq. (5) with zeros separated by \(2/5\) (dashed line) and a nearby solution \(x(t)\) (solid line).]

The reason for the instability of \(p(t)\) is not hard to grasp. Let us write \(z\) for the first positive zero of \(x(t)\). For \(x_0\) close enough to \(p_0\), the zero \(z\) (indeed, the entire restriction of \(x\) to \([0,z]\)) is completely determined by \(-u_2 - u_1\). More particularly, \(z\) is given by the formula

$$\begin{aligned} z = 2\big [1 - (u_1 + u_2)\big ]. \end{aligned}$$

If \(x_0\) is close enough to \(p_0\) and has the two negative zeros we have assumed, the same conditions we have just assumed for \(x_0\) will hold for \(x_z\) also, and the negative zeros of \(x_z\) are \(-u_1 - z\) and \(-z\). More generally, as long as \(x(t)\) stays close enough to \(p(t)\) the map that “advances \(x_t\) by one zero” is semiconjugate to the two-dimensional affine map

$$\begin{aligned} (u_1,u_2) \mapsto (z,u_1) = \Big (2\left[ 1 - (u_1 + u_2)\right] ,u_1\Big ) \end{aligned}$$

with fixed point \((2/5,2/5)\). This fixed point, however, is repelling—and so, for the solution \(x\) pictured above, as time increases we shall see the spacing between successive zeros of \(x\) deviating more and more from \(2/5\) until \(x\) actually becomes slowly oscillating. Loosely speaking, our objective is to “stabilize” \(p(t)\) by altering Eq. (5) so that the map \((u_1,u_2) \mapsto (z,u_1)\) acquires an asymptotically stable fixed point. More broadly, we shall choose \(f\) and \(d\) carefully so that, in a small neighborhood about our periodic solution \(p(t)\) of interest, solution trajectories are characterized (or eventually characterized) by the convergent orbit of a finite-dimensional map to which an appropriate Poincaré map is semiconjugate.
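Though the eigenvalue computation behind this is elementary, it may be reassuring to see it checked numerically. The short script below (ours; it plays no role in the arguments of the paper) computes the eigenvalues of the linear part of the map above and iterates the map from a point near \((2/5,2/5)\), so that the drift of the zero spacings away from \(2/5\) is visible directly.

```python
# Check that the affine map (u1, u2) |-> (2[1 - (u1 + u2)], u1) has a
# repelling fixed point: its linear part has eigenvalues -1 +/- i,
# each of modulus sqrt(2) > 1.
import numpy as np

A = np.array([[-2.0, -2.0],
              [1.0,  0.0]])                 # linear part of the affine map
print(np.linalg.eigvals(A), np.abs(np.linalg.eigvals(A)))

u = np.array([0.401, 0.399])                # zero spacings near (2/5, 2/5)
for _ in range(10):
    u = np.array([2.0 * (1.0 - u.sum()), u[0]])
print(u)                                    # the spacings have drifted away from 2/5
```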

This idea of exploiting a semiconjugacy between a Poincaré map and a finite-dimensional map is an old one, and has been used by several authors to exhibit phenomena for special delay equations whose general forms are not tractable. A relatively general discussion of how to assess stability of periodic solutions of delay equations when such a semiconjugacy is available is presented in [5]. In the present paper, though, we will be able to prove the asymptotic stability of our periodic solution with a minimum of formal machinery.

2 Existence, Uniqueness, and Oscillation Speed

For general theory on state-dependent delay equations, we refer to the survey article [2].

We shall assume that Eq. (4) satisfies the following hypotheses:

  1. (i)

    \(f : \mathbb {R}\rightarrow \mathbb {R}\) is continuous and globally Lipschitz with Lipschitz constant \(c\);

  2. (ii)

    \(|f(x)| \le 1\) for all \(x\);

  3. (iii)

    \(d: \mathbb {R}^2 \rightarrow [0,1]\) is continuous and globally Lipschitz with Lipschitz constant \(L\) (with respect to the sup norm on \(\mathbb {R}^2\));

  4. (iv)

    \(r \in (0,1)\).

In the usual way we write \(C = C([-1,0],\mathbb {R})\) for the Banach space of real-valued continuous functions on \([-1,0]\), equipped with the sup norm; if \(x\) is any real-valued continuous function whose domain includes the interval \([t-1,t]\) we write \(x_t\) for the member of \(C\) given by

$$\begin{aligned} x_t(s) = x(t+s), \quad s \in [-1,0]. \end{aligned}$$

We write \(K\) for the closed subset of \(C\) consisting of Lipschitz continuous functions with Lipschitz constant at most \(M \ge 1\). Notice that, if \(x\) and \(y\) are in \(K\), we have

$$\begin{aligned}&\Big |f\big (x(-d(x(0),x(-r)))\big ) - f\big (y(-d(y(0),y(-r)))\big )\Big | \\&\quad \le c \Big |x\big (-d(x(0),x(-r))\big ) - y\big (-d(y(0),y(-r))\big )\Big | \\&\quad \le c \Big [ \big |x(-d(x(0),x(-r))) - x(-d(y(0),y(-r)))\big |\\&\qquad +\;\big |x(-d(y(0),y(-r))) - y(-d(y(0),y(-r)))\big | \Big ] \\&\quad \le c \Big [ M\big |d(x(0),x(-r)) - d(y(0),y(-r))\big | + \Vert x - y\Vert \Big ] \\&\quad \le c \big (M L + 1\big ) \big \Vert x - y\Vert . \end{aligned}$$

Thus the map \(K \ni x \mapsto f\big (x(-d(x(0),x(-r)))\big ) \in \mathbb {R}\) is Lipschitz.

If \(x: [-1,\infty ) \rightarrow \mathbb {R}\) is a function for which \(x_0 \in K\) and \(x'(t)\) satisfies Eq. (4) for all \(t > 0\), we call \(x\) a continuation of \(x_0\) as a solution of Eq. (4).

Proposition 2.1

(Existence, uniqueness, and continuous dependence in \(K\) for Eq. (4)) Any \(x_0 \in K\) has a unique continuation \(x: [-1,\infty ) \rightarrow \mathbb {R}\) as a solution of Eq. (4). This continuation is differentiable for all \(t > 0\), and \(x_t \in K\) for all \(t \ge 0\).

Moreover, the solution semiflow \(T: \mathbb {R}_+ \times K \rightarrow K\) for Eq. (4) is continuous in the sense that, given any \(x_0 \in K\), \(\epsilon > 0\), and \(\tau _0 > 0\), there exists a \(\delta > 0\) such that \(\Vert y_0 - x_0\Vert < \delta \) (where \(y_0 \in K\)) implies that \(\Vert T(t,y_0) - T(t,x_0)\Vert < \epsilon \) for all \(t \in [0,\tau _0]\).

Proof

Since \(x \mapsto f\big (x(-d(x(0),x(-r)))\big )\) is Lipschitz, existence and uniqueness of solutions, and continuous dependence on initial conditions, are standard. We have \(x_t \in K\) for all \(t \ge 0\) since \(|x'(t)| \le 1 \le M\) for all \(t > 0\), by hypothesis (ii). \(\square \)

In the present paper we shall content ourselves with working in the phase space \(K\). With additional smoothness hypotheses on \(f\) and \(d\) we could use the “\(C^1\)-solution framework” developed in [15] and presented also in [2].
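Although we work with exact solutions throughout, the reader may find it useful to keep a concrete approximation scheme in mind. The sketch below is our own, is for illustration only, and is not used anywhere in what follows: it approximates the semiflow \(T\) by a fixed-step explicit Euler method, storing the computed past of \(x\) on a grid and evaluating \(x\) at delayed times by linear interpolation. The arguments f and d stand for any functions satisfying hypotheses (i)–(iv), and x0_func for an initial condition in \(K\).

```python
# A minimal numerical sketch (not part of the paper's argument): explicit
# Euler for x'(t) = f(x(t - d(x(t), x(t - r)))), with the history stored on
# a uniform grid and delayed values obtained by linear interpolation.
import numpy as np

def integrate(f, d, r, x0_func, t_end, dt=1e-3):
    """Approximate the solution on [0, t_end], given x0_func on [-1, 0]."""
    n_hist = int(round(1.0 / dt))
    n_steps = int(round(t_end / dt))
    ts = np.linspace(-1.0, t_end, n_hist + n_steps + 1)
    xs = np.empty_like(ts)
    xs[: n_hist + 1] = [x0_func(s) for s in ts[: n_hist + 1]]

    def x_at(i, time):
        # linear interpolation of the already-computed values x(ts[0..i])
        return np.interp(time, ts[: i + 1], xs[: i + 1])

    for i in range(n_hist, n_hist + n_steps):
        t = ts[i]
        delay = d(xs[i], x_at(i, t - r))      # state-dependent delay, in [0, 1]
        xs[i + 1] = xs[i] + dt * f(x_at(i, t - delay))
    return ts, xs
```

Since \(|f| \le 1\), the computed solution inherits the Lipschitz bound \(1 \le M\) on \([0,t\_end]\), mirroring the way \(x_t\) stays in \(K\) in Proposition 2.1.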

We now turn to oscillation speed. We add the additional hypotheses that

  1. (v)

    \(uf(u) < 0\) for all \(u \ne 0\) (negative feedback);

  2. (vi)

    There is some \(\gamma _1 > 0\) such that \(d(u,v) = 1\) when \(|u| \le \gamma _1\).

Let us write \(\hat{K}\) for the subset of initial conditions in \(K\) that have only finitely many zeros. Given a solution \(x\) of Eq. (4) with \(x_t \in \hat{K}\), let us define

$$\begin{aligned} \tau (x_t) = \inf \big \{ s \ge t \ : \ x(s) = 0 \big \} \end{aligned}$$

(where \(\tau (x_t) = \infty \) if \(x\) has no zeros on \([t,\infty )\)). We now define the following oscillation speed for \(x_t\)—it is essentially the familiar one.

$$\begin{aligned} \omega (x_t) = \left\{ \begin{array}{cc} \hbox { number of zeros of } x \hbox { on }[\tau (x_t)-1,\tau (x_t)], &{} \quad \tau (x_t) < \infty ; \\ 1, &{} \quad \tau (x_t) = \infty . \end{array} \right. \end{aligned}$$
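Read literally, and purely as an illustration (the zeros of \(x\) are assumed here to be known and listed in increasing order), the definitions of \(\tau \) and \(\omega \) amount to the following.

```python
# Count the zeros of x on [tau(x_t) - 1, tau(x_t)], where tau(x_t) is the
# first zero of x at or after time t; return 1 if there is no such zero.
import bisect

def oscillation_speed(zeros, t):
    i = bisect.bisect_left(zeros, t)          # index of the first zero >= t
    if i == len(zeros):
        return 1                              # tau(x_t) = infinity
    tau = zeros[i]
    lo = bisect.bisect_left(zeros, tau - 1.0)
    hi = bisect.bisect_right(zeros, tau)
    return hi - lo                            # zeros lying in [tau - 1, tau]
```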

Proposition 2.2

Suppose that hypotheses (i–vi) hold, that \(x : [-1,\infty ) \rightarrow \mathbb {R}\) is a solution of Eq. (4), and that \(x_t \in \hat{K}\), \(t \ge 0\). Then

  1. (a)

    \(x_s \in \hat{K}\) for all \(s \ge t\); and

  2. (b)

    \(\omega (x_s) \le \omega (x_t)\) for all \(s \ge t\).

Proof

If \(t \le s \le \tau (x_t)\), we clearly have \(x_s \in \hat{K}\), \(\tau (x_s) = \tau (x_t)\), and \(\omega (x_s) = \omega (x_t)\). Let us write \(z = \tau (x_t)\).

Since \(x_z\) has only finitely many zeros, there is some \(\epsilon \in (0,\gamma _1)\) (here \(\gamma _1\) is as in hypothesis (vi)) such that \(x\) is strictly of one sign on \((z-1,z-1 + \epsilon )\); since \(|x'(s)| \le 1\) for all \(s > 0\), \(|x(s)| < \gamma _1\) for \(s \in (z,z+\epsilon )\) and so \(d(x(s),x(s-r)) = 1\) for all such \(s\). Thus \(x'\) is strictly of one sign on \((z,z+\epsilon )\). Let us suppose, for the sake of definiteness, that \(x\) is strictly negative on \((z-1,z-1+\epsilon )\) and so that \(x' > 0\) on \((z, z+\epsilon )\).

Since \(x\) is continuous its set of zeros is closed; it follows that either \(x\) has no zeros after \(z\) (in which case both parts of the proposition hold) or that there is a well-defined first zero \(z' > z\). Observe that \(x_s \in \hat{K}\) for all \(s \in (z,z']\); we wish to show that \(\omega (x_s) \le \omega (x_z) = \omega (x_t)\) for all such \(s\). This is the same as saying that \(\omega (x_{z'}) \le \omega (x_z)\), which is in turn the same as saying that \([z'-1,z']\) does not contain more zeros of \(x\) than does \([z-1,z]\).

The only zero of \(x\) that might lie in \([z'-1,z']\) but not in \([z-1,z]\) is \(z'\) itself; to prove that \(\omega (x_{z'}) \le \omega (x_z)\), then, it suffices to show that there is some zero of \(x\) in \([z-1,z]\) that is not in \([z'-1,z']\).

An argument similar to that above shows that there is some \(\epsilon ' \in (0,\gamma _1)\) such that \(x\) is strictly of one sign on \((z'-1-\epsilon ', z' - 1)\), and that consequently \(x'(t)\) must also be strictly of one sign on \((z'-\epsilon ',z')\). Since there are no zeros of \(x\) between \(z\) and \(z'\), the sign of \(x'(t)\) must be negative on this latter interval and so the sign of \(x\) must be positive on \((z'-1-\epsilon ', z' - 1)\). This shows that there must be a zero of \(x\) on \([z-1 + \epsilon , z'-1 - \epsilon ']\). Let us write \(\tilde{z}\) for the first zero of \(x\) greater than or equal to \(z - 1 + \epsilon \). Either \(\tilde{z} = z\) or \(\tilde{z} < z\); either way, \(\tilde{z}\) is a zero of \(x\) that is on the interval \([z-1,z]\) and that is not on \([z'-1,z']\). We conclude that \(\omega (x_{z'}) \le \omega (x_z)\).

Since \(x_{z'} \in \hat{K}\), an argument just like that above shows that \(z'\) is an isolated zero of \(x\). Proceeding inductively we conclude that, if \(x_t \in \hat{K}\), then the subsequent zeros of \(x\) form an increasing sequence \(t < z_1 < z_2 < z_3 < \cdots \) (perhaps finite) of isolated points with

$$\begin{aligned} \omega \big (x_{z_{k+1}}\big ) \le \omega \big (x_{z_k}\big ). \end{aligned}$$

These zeros cannot have a finite accumulation point, since if they did the intervals \([z_k - 1,z_k]\) would eventually contain arbitrarily many zeros of \(x\), contradicting the bound \(\omega (x_{z_k}) \le \omega (x_t) < \infty \) just established. We conclude that the sequence of zeros is either finite or approaches \(\infty \). The proposition now follows. \(\square \)

We are now in a position to state our main theorem.

Theorem 2.3

(A stable rapidly oscillating periodic solution for Eq. (4)). There are instances of Eq. (4) satisfying hypotheses (i–vi), including instances where \(f\) is nonincreasing, such that Eq. (4) has a periodic solution \(p\) satisfying the following conditions.

  1. (a)

    The zeros of \(p\) occur at integer multiples of \(z_*\), where \(2z_* < 1 < 3z_*\) (in particular, \(p\) has constant oscillation speed \(3\)).

  2. (b)

    \(p\) is asymptotically stable in the following sense: there is a neighborhood \(U\) about \(p_0\) in \(K\) such that, given any \(x_0 \in U\) with continuation \(x\) as a solution of Eq. (4), there is an infinite increasing sequence \(z_1 < z_2 < z_3 < \cdots \) of positive numbers such that \(x(z_k) = 0\) for all \(k\), \(|z_k - z_{k+1}| \rightarrow z_*\) as \(k \rightarrow \infty \), and \(x_{z_{2k}} \rightarrow p_0\) as \(k \rightarrow \infty \).

3 The Example Equation

We now introduce the much more specific version of Eq. (4) that we will work with. In particular, we assume the following additional hypotheses.

  1. (vii)

    \(f\) is odd, and there is an \(\alpha > 0\) such that \(f(x) = -\hbox {sign}(x)\) for all \(|x| \ge \alpha \).

  2. (viii)

    There are numbers \(\gamma _2\), \(\gamma \), \(\kappa \) and \(m\) such that the following hold: \(r< \gamma _1 < \gamma _2 < \gamma < \kappa \); \(m > 1\); and \(1 - m(\kappa - (\gamma - r)) \ge 0\) (here \(\gamma _1\) is as in hypothesis (vi)).

  3. (ix)

    \(d(u,v) = g(v)\) whenever \(|u| \ge \gamma _2\), where \(g\) is even and satisfies

    $$\begin{aligned} g(u) = \left\{ \begin{array}{ll} 1, &{} \quad u \in [0,\gamma -r]; \\ 1 - m(u - (\gamma - r)), &{} \quad u \in [\gamma - r,\kappa ]; \\ 1 - m(\kappa - (\gamma - r)), &{} \quad u \ge \kappa . \end{array}\right. \end{aligned}$$

    Also, \(d(u,v) \in [g(v),1]\) and \(d(-u,-v) = d(u,v)\) for all \((u,v) \in \mathbb {R}^2\).

We emphasize especially the requirement that \(m > 1\), which will play a crucial role below. The requirement that \(1 - m(\kappa - (\gamma - r)) \ge 0\) ensures that \(d \in [0,1]\). Observe that hypotheses (i–ix) can be satisfied with both non-monotonic and monotonic (though not strictly monotonic) feedback functions \(f\).
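For concreteness, here is one family of choices consistent with hypotheses (i)–(ix). It is only a sketch of ours, not the construction used for the figures below: the feedback f is the clipped-linear (hence nonincreasing) instance of hypothesis (vii), g is exactly the function of hypothesis (ix), and the particular interpolation used for d when \(\gamma _1< |u| < \gamma _2\) is our own choice, since the hypotheses leave d free there.

```python
# One possible (f, g, d) satisfying hypotheses (i)-(ix); the interpolation in
# d between |u| = gamma1 and |u| = gamma2 is an assumption of ours.
import numpy as np

def make_example(alpha, r, gamma1, gamma2, gamma, kappa, m):
    def f(x):
        # odd, nonincreasing, |f| <= 1, negative feedback,
        # and f(x) = -sign(x) for |x| >= alpha
        return -np.clip(x / alpha, -1.0, 1.0)

    def g(v):
        v = abs(v)                                       # g is even
        return 1.0 - m * np.clip(v - (gamma - r), 0.0, kappa - (gamma - r))

    def d(u, v):
        # s = 1 for |u| <= gamma1 (so d = 1, hypothesis (vi)); s = 0 for
        # |u| >= gamma2 (so d = g(v), hypothesis (ix)); linear in between.
        # In all cases d(u, v) lies in [g(v), 1] and d(-u, -v) = d(u, v).
        s = np.clip((gamma2 - abs(u)) / (gamma2 - gamma1), 0.0, 1.0)
        return g(v) + (1.0 - g(v)) * s

    return f, g, d
```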

One consequence of the evenness of \(d\) and \(g\), and the oddness of \(f\), is the following.

Lemma 3.1

Suppose that hypotheses (i–ix) are satisfied, and write \(T: \mathbb {R}_+ \times K \rightarrow K\) for the solution semiflow of Eq. (4). Then \(T(t,-x_0) = -T(t,x_0)\) for all \(t \ge 0\) and all \(x_0 \in K\).

We now consider an initial condition \(x_0 \in K\) satisfying the following assumptions, which we shall refer to collectively as \((I)\). Throughout, \(\alpha \) is as in hypothesis (vii).

  • \(x(0) = 0\);

  • \(x\) has a smallest zero \(-\zeta < 0\) on \([-1,0]\); \([-\zeta - \alpha ,-\zeta + \alpha ] \subset (-1,0)\); and \(x\) has constant slope \(1\) on \([-\zeta - \alpha ,-\zeta + \alpha ]\) and on \([-\alpha ,0]\).

  • \(x(t) \le - \alpha \) on \([-1,-\zeta - \alpha ]\).

  • There is some \(\ell > 0\) such that \(x(t) \ge \alpha \) for all \(t \in [-\zeta + \alpha ,-\zeta + \alpha + \ell ]\).

The following figure illustrates an initial condition \(x_0\) satisfying the assumptions \((I)\), and some of its continuation. The notations \(\tau \) and \(z\) will be explained later.

[Figure b: an initial condition \(x_0\) satisfying assumptions \((I)\), together with part of its continuation; the times \(\tau \) and \(z\) are indicated.]

We now assume the following conditions, where

$$\begin{aligned} \tilde{\tau } = \frac{1 + m\gamma -\zeta -\alpha }{1+m}. \end{aligned}$$
$$\begin{aligned}&(C1) \quad 0 < \alpha < r < \gamma _1 < \gamma _2 < \gamma < \kappa , \ \hbox {and} \ m > 1, \ \hbox {and} \ 1 - m(\kappa - (\gamma - r)) \ge 0; \\&(C2) \quad \gamma - 1 < -\zeta - \alpha ; \\&(C3) \quad \tilde{\tau } < \kappa ; \\&(C4) \quad 2r < \tilde{\tau } - \gamma ; \\&(C5) \quad \tilde{\tau } + r - 1 + m(\tilde{\tau } - \gamma ) + mr + (2 + m)\alpha /(m+1) < -\zeta + \ell ; \\&(C6) \quad \gamma > \gamma _2 + 2r; \\&(C7) \quad (m-1)(\tilde{\tau } - \gamma ) < 2r - 2\alpha + 2\alpha /(m+1); \\&(C8) \quad 2\tilde{\tau } + \alpha - 1 < -\zeta + \ell . \end{aligned}$$

Note that (C1) just expresses conditions we have already imposed, along with the additional requirement that \(\alpha < r\).

We first establish that it is possible to satisfy all of these conditions.

Lemma 3.2

There are choices of \(r\), \(\gamma _1\), \(\gamma _2\), \(\gamma \), \(\kappa \), \(m\), and \(\alpha \) such that conditions (C1) through (C8) are all satisfied for all \(\zeta \) in some open interval about \(\zeta _* = 2z_*\) and all \(\ell \) in some open interval about \(\ell _* = z_* - 2\alpha \), where

$$\begin{aligned} z_* = \frac{2 + 2m\gamma }{5 + m}. \end{aligned}$$

Furthermore, \(m\) and \(\gamma \) can be chosen such that

$$\begin{aligned} 0 < z_* - 2\alpha < 2z_* + 2\alpha < 1 < 3z_* - 2\alpha . \end{aligned}$$
(6)

We shall use condition (6) in Sect. 4.

Proof

Since the set of parameters for which conditions (C1) through (C8) hold is open, it is enough to show that there are choices of \(r\), \(\gamma _1\), \(\gamma _2\), \(\gamma \), \(\kappa \), and \(m\) such that conditions (C1) through (C8) (except of course for the positivity of \(\alpha \)) hold when we take \(\zeta = \zeta _*\), \(\ell = \ell _*\), and \(\alpha = 0\). Direct computation shows that this occurs when we take (for example)

$$\begin{aligned} r = 1/100, \ \gamma _1 = 1/20, \ \gamma _2 = 1/10, \ \gamma = 1/6, \ \kappa = 1/4, \ m = 3/2. \end{aligned}$$

With these choices, we get \(z_* \approx 0.385\), so with a sufficiently small \(\alpha \) the bounds in (6) are satisfied. \(\square \)
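The “direct computation” is easily automated. The check below (ours, and not needed for the proof) evaluates (C1)–(C8) at \(\zeta = \zeta _*\) and \(\ell = \ell _*\), with the parameters above and \(\alpha = 1/200\) (the value of \(\alpha \) used for the figures later in the paper), together with (6); all of the checks evaluate to True with these values.

```python
# Numerical verification that the sample parameters satisfy (C1)-(C8) at
# zeta = zeta_*, ell = ell_*, together with the bounds in (6).
r, g1, g2, gam, kap, m = 1/100, 1/20, 1/10, 1/6, 1/4, 3/2
alpha = 1/200                                   # the value used for the figures below

z_star = (2 + 2 * m * gam) / (5 + m)            # approximately 0.385
zeta, ell = 2 * z_star, z_star - 2 * alpha      # zeta_* and ell_*
tau = (1 + m * gam - zeta - alpha) / (1 + m)    # the quantity tilde(tau)

checks = {
    "C1": 0 < alpha < r < g1 < g2 < gam < kap and m > 1
          and 1 - m * (kap - (gam - r)) >= 0,
    "C2": gam - 1 < -zeta - alpha,
    "C3": tau < kap,
    "C4": 2 * r < tau - gam,
    "C5": tau + r - 1 + m * (tau - gam) + m * r
          + (2 + m) * alpha / (m + 1) < -zeta + ell,
    "C6": gam > g2 + 2 * r,
    "C7": (m - 1) * (tau - gam) < 2 * r - 2 * alpha + 2 * alpha / (m + 1),
    "C8": 2 * tau + alpha - 1 < -zeta + ell,
    "(6)": 0 < z_star - 2 * alpha < 2 * z_star + 2 * alpha < 1 < 3 * z_star - 2 * alpha,
}
print(all(checks.values()), checks)
```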

We henceforth assume that conditions (C1) through (C8), as well as (6), hold. Let us now choose an initial condition \(x_0 \in K\) satisfying the assumptions \((I)\) described above. We write \(x\) for the continuation of \(x_0\) as a solution of Eq. (4), and \(z\) for the first positive zero of \(x\). For \(t \ge 0\), we also write

$$\begin{aligned} y(t) = x\big (t - d(x(t),x(t-r))\big ). \end{aligned}$$

The following proposition is the key result of the paper.

Proposition 3.3

With notation as above, the following hold.

  1. (i)

    \(x|_{[0,z]}\) is completely determined by \(\zeta \), and depends continuously on \(\zeta \).

  2. (ii)

    \(z\) is given by the formula

    $$\begin{aligned} z = 2\tilde{\tau } + \frac{2\alpha }{m+1}, \end{aligned}$$

    where \(\tilde{\tau }\) is as above.

  3. (iii)

    \(x'(t) = 1\) on \((0,\alpha )\), \(x'(t) = -1\) on \((z - \alpha ,z)\), \(x(t) \ge \alpha \) on \([\alpha ,z-\alpha ]\), and \(x(z - 1) \ge \alpha \).

The reader may find it helpful to refer to the above figure while reading the following proof. Also, for a first reading the essential ideas are unchanged, and the calculations slightly simpler, if the reader supposes that \(\alpha = 0\) and that \(\gamma _1 = \gamma _2\) (these simplifications make \(f\) and \(d\) discontinuous).

Proof

By our assumptions on \(d\) and the fact that \(|x'(t)| \le 1\), \(d(x(t),x(t-r)) = 1\) at least for all \(t \in [0,\gamma _1]\) and so, by assumptions \((I)\) and conditions \((C1)\) and \((C2)\), \(y(t) \le -\alpha \) and \(x'(t) = 1\) for all such \(t\). Since \(\alpha < \gamma _1\) by condition \((C1)\), the first part of point (iii) of the proposition is established.

Let us write

$$\begin{aligned} \tau = \inf \big \{t \ge 0 \ : \ t - d(x(t),x(t-r)) = -\zeta - \alpha \ \big \}. \end{aligned}$$

For all \(t \in (0,\tau ]\), the delayed time \(t - d(x(t),x(t-r))\) lies in \([-1,-\zeta -\alpha ]\) (by continuity and the definition of \(\tau \)), so \(y(t) \le -\alpha \), \(x'(t) = 1\) and, for such \(t\), \(x(t) = t\). Since \(r < \gamma _1\) by (C1), we have \(x(t-r) = t-r\) for all \(t \in [\gamma _1,\tau +r]\). By hypothesis (ix), then, for \(t \in [0,\min (\tau ,\gamma )]\) we have \(d(x(t),x(t-r)) = 1\). Since \(\gamma - 1 < -\zeta - \alpha \) by (C2), \(\tau \) cannot be less than or equal to \(\gamma \); we therefore have

$$\begin{aligned} d\big (x(\gamma ),x(\gamma - r)\big ) = 1 \ \ \hbox {and} \ \ \gamma - d\big (x(\gamma ),x(\gamma -r)\big ) < -\zeta -\alpha \ \ \hbox {and} \ \ \tau > \gamma . \end{aligned}$$

We have already established that \(x(t-r) = t-r\) for all \(t \in [\gamma _1,\tau +r]\). Since \(|x'(s)| \le 1\) for all \(s\) and \(\tau - r > \gamma - r > \gamma _2\) by (C6), we have that \(|x(t)| \ge \gamma _2\) for all \(t \in [\gamma _2,\tau + r]\). Combining these two observations yields that

$$\begin{aligned} d(x(t),x(t-r)) = g(x(t-r)) = g(t - r) = 1 - m(t - r - (\gamma - r)) = 1 - m(t - \gamma ) \end{aligned}$$

for all \(t \in [\gamma ,\min (\tau +r,\kappa +r)]\).

Recall that we are writing

$$\begin{aligned} \tilde{\tau } = \frac{1 + m\gamma -\zeta -\alpha }{1+m}. \end{aligned}$$

Direct computation now shows that \(\tilde{\tau }\) is the unique solution in \(\tau \) of

$$\begin{aligned} \tau - (1 - m(\tau - \gamma )) = -\zeta - \alpha . \end{aligned}$$

Since \(\tilde{\tau } < \kappa \) by condition (C3), we see that in fact \(\tau = \tilde{\tau }\)—that is, that

$$\begin{aligned} \tau = \frac{1 + m\gamma -\zeta -\alpha }{1+m}. \end{aligned}$$

The above expression for \(d(x(t),x(t-r))\) can also be made more specific:

$$\begin{aligned} d\big (x(t),x(t-r)\big ) = 1 - m(t - \gamma ) \quad \hbox {for all} \ t \in [\gamma , \tau +r]. \end{aligned}$$
(7)

Now, as long as \(t - d(x(t),x(t-r)) \in [-\zeta -\alpha ,-\zeta +\alpha ]\) and \(t \in [\tau ,\tau +r]\), we have \(x'(t - d(x(t),x(t-r))) = 1\) and so

$$\begin{aligned} \frac{d}{dt}y(t) = \frac{d}{dt}\big (t - d(x(t),x(t-r))\big ) = 1 - \frac{d}{dt}d\big (x(t),x(t-r)\big ) = 1+m. \end{aligned}$$

Since \(\alpha < r\) and \(m > 1\) by \((C1)\), we certainly have \(2\alpha /(m+1) < r\) and so we have that

$$\begin{aligned} y\big (\tau + 2\alpha /(m+1)\big ) = \alpha . \end{aligned}$$

More generally, on the interval \([\tau ,\tau + 2\alpha /(m+1)]\), the pair \((x(t),y(t))\) satisfies the planar system of ODEs

$$\begin{aligned} x'(t) = f(y(t)), \ y'(t) = m+1; \ \ \ x(\tau ) = \tau , \ y(\tau ) = -\alpha . \end{aligned}$$

Since \(f\) is odd, we conclude that the graph of \(x(t)\) forms a symmetric arc on this interval, and that \(x(\tau + 2\alpha /(m+1)) = \tau \). Up to translation, this arc does not depend on \(\zeta \)—or anything about \(x_0\) except that it satisfies assumptions \((I)\).
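In more detail, since \(y(s) = -\alpha + (m+1)(s - \tau )\) on this interval, the endpoint identity follows from the substitution \(u = -\alpha + (m+1)(s - \tau )\):

$$\begin{aligned} x\big (\tau + 2\alpha /(m+1)\big ) - x(\tau ) = \int _{\tau }^{\tau + 2\alpha /(m+1)} f\big (y(s)\big )\,ds = \frac{1}{m+1}\int _{-\alpha }^{\alpha } f(u)\,du = 0, \end{aligned}$$

the last integral vanishing because \(f\) is odd.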

The rest of the proof consists in proving the following claim: that \(y(t) \ge \alpha \) for all

$$\begin{aligned} t \in \big [\tau + 2\alpha /(m+1),\tau + 2\alpha /(m+1) + \tau \big ]. \end{aligned}$$

For then we will have that \(x'(t) = -1\) for all such \(t\), and the formula for \(z\) (and the rest of the proposition as well) will follow. In particular, the restriction of \(x\) to \([0,z]\) depends completely (and continuously) on \(\tau \), which in turn depends continuously on \(\zeta \).

As \(t\) moves from \(\tau \) to \(\tau + 2\alpha /(m+1) + 2r\), \(x(t)\) moves from \(\tau \) to at least \(\tau - 2r\) (by negative feedback \(x(t) \ge \tau \) on the arc just described, and thereafter \(|x'(t)| \le 1\) over a time interval of length \(2r\)). Since \(\tau - 2r > \gamma \) by condition \((C4)\), for all \(t\) in this interval the value of \(d(x(t),x(t-r))\) is completely determined by the value of \(x(t-r)\).

At time \(t = \tau \), we have \(x(\tau - r) = \tau - r\) and

$$\begin{aligned} d\big (x(\tau ),x(\tau - r)\big ) = 1 - m(\tau - \gamma ). \end{aligned}$$

As already explained (recall (7)), as \(t\) moves from \(\tau \) to \(\tau +r > \tau + 2\alpha /(m+1)\), \(d(x(t),x(t-r))\) decreases with derivative \(-m\), \(t - d(x(t),x(t-r))\) increases with derivative \(1+m\), and we have

$$\begin{aligned} d\big (x(\tau +r), x(\tau )\big ) = 1 - m(\tau - \gamma ) - mr. \end{aligned}$$

This tells us that \(t - d(x(t),x(t-r)) \ge -\zeta + \alpha \) for \(t \in [\tau + 2\alpha /(m+1), \tau + r]\). Furthermore, we have

$$\begin{aligned} \tau + r - d\big (x(\tau + r),x(\tau )\big ) = \tau + r - 1 + m(\tau - \gamma ) + mr, \end{aligned}$$

which is less than \(-\zeta + \ell \) by (C5), so we actually have that \(y(t) \ge \alpha \) and \(x'(t) = -1\) on \([\tau + 2\alpha /(m+1), \tau +r]\).

Now, as \(t\) moves from \(\tau + r\) to \(\tau + r + 2\alpha /(m+1)\), \(x(t-r)\) travels a symmetric arc from value \(\tau \), up to a maximum value less than \(\tau + \alpha /(m+1)\), and then back down to \(\tau \). In this interval, since \(d(x(t),x(t-r)) = g(x(t-r))\) and \(g\) is nonincreasing on the positive real line, we have that \(d(x(t),x(t-r)) \le d(x(\tau +r),x(\tau ))\). Therefore we certainly have that \(t - d(x(t),x(t-r)) > -\zeta + \alpha \) on this interval. Very crudely, the largest that \(t - d(x(t),x(t-r))\) can be on this interval is obtained by taking the maximum value of \(t\) and subtracting the smallest possible value of \(g(x(t-r))\): therefore, for \(t \in [\tau +r,\tau +r + 2\alpha /(m+1)]\), we have

$$\begin{aligned}&t - d\big (x(t),x(t -r) \big ) \le \tau + r + 2\alpha /(m+1) - g\big (\tau + \alpha /(m+1)\big ) \\&\quad \le \tau + r + 2\alpha /(m+1) - 1 + m(\tau - \gamma ) + mr + m\alpha /(m+1). \end{aligned}$$

This is less than \(-\zeta + \ell \) by (C5). Thus we actually have that \(y(t) \ge \alpha \) and that \(x'(t) = -1\) on \([\tau + r, \tau + r + 2\alpha /(m+1)]\), and hence on \([\tau + 2\alpha /(m+1), \tau + 2\alpha /(m+1) + r]\).

Now we consider the interval \([\tau + 2\alpha /(m+1) + r, \tau + 2\alpha /(m+1) + 2r]\). \(d(x(t),x(t-r))\) is still completely determined by \(x(t-r)\) on this interval, and in fact \(x(t-r)\) moves with derivative \(-1\) from \(\tau < \kappa \) down to \(\tau - r > \gamma \). Thus, on this interval, \(d(x(t),x(t-r))\) has derivative \(m\) and \(t - d(x(t),x(t-r))\) has derivative \(1 - m\). The delayed time \(t - d(x(t),x(t-r))\) is actually moving backwards here. It is therefore clear that \(t - d(x(t),x(t-r)) \le -\zeta + \ell \) for all \(t\) in this interval. To ensure that \(t - d(x(t),x(t-r)) \ge -\zeta + \alpha \) for all \(t\) in this interval, we just have to check that the inequality holds at the rightmost endpoint \(\tau + 2\alpha /(m+1) +2r\). Since the value of \(x(t-r)\) at \(t = \tau + 2\alpha /(m+1) + 2r\) is \(\tau - r\), and

$$\begin{aligned} g(\tau - r) = 1 - m(\tau - \gamma ), \end{aligned}$$

and \(\tau - (1 - m(\tau - \gamma )) = -\zeta - \alpha \) (remember how we derived \(\tau \) in the first place), we have that

$$\begin{aligned} t - d\big (x(t),x(t-r)\big )&= \tau + 2\alpha /(m+1) + 2r - \big (1 - m(\tau - \gamma )\big ) \\&= -\zeta - \alpha + 2r+ 2\alpha /(m+1) \end{aligned}$$

for \(t = \tau + 2\alpha /(m+1) + 2r\). The last number above is certainly greater than \(-\zeta + \alpha \) since \(r > \alpha \).

Let us write \(\tau _2 = \tau + 2\alpha /(m+1) + 2r\). We have proven that \(y(t) \ge \alpha \) and that \(x'(t) = -1\) for all \(t \in [\tau + 2\alpha /(m+1),\tau _2]\). Note that \(x(\tau _2) = \tau - 2r\). We are about to use the just-established equality

$$\begin{aligned} \tau _2 - d\big (x(\tau _2),x(\tau _2-r)\big ) = -\zeta - \alpha + 2r+ 2\alpha /(m+1). \end{aligned}$$
(8)

Consider the interval \([\tau _2, \tau _2 + (\tau - \gamma )]\). There are two possibilities: either

$$\begin{aligned} -\zeta + \alpha < t - d(x(t),x(t-r)) < -\zeta + \alpha + \ell \end{aligned}$$
(9)

for all \(t\) in this interval, or not. Imagine not, and let \(t_*\) be the first time on the interval where the inequality fails. Then, on \([\tau _2,t_*]\), we have that \(x(t)\) has derivative \(-1\) and value no less than \(\gamma - 2r > \gamma _2\) (see condition \((C6)\)). Thus \(d(x(t),x(t-r))\) is still given by \(g(x(t-r))\) on \([\tau _2,t_*]\) and the delayed time \(t - d(x(t),x(t-r))\) has derivative \(1 - m < 0\). Thus only the first inequality in (9) can fail, so we are imagining that \(t_* - d(x(t_*),x(t_*-r)) = -\zeta +\alpha \). Since \((t_* - \tau _2) \le (\tau - \gamma )\), then, we must have

$$\begin{aligned} (m-1)(\tau - \gamma )&\ge \tau _2 - d\big (x(\tau _2),x(\tau _2-r)\big ) - (-\zeta + \alpha ) \\&= -\zeta - \alpha + 2r + 2\alpha /(m+1) + \zeta - \alpha \\&= 2r - 2\alpha + 2\alpha /(m+1) \end{aligned}$$

— but this contradicts condition (C7). We conclude that \(x'(t) = -1\) and \(y(t) \ge \alpha \) throughout the interval \([\tau _2, \tau _2 + (\tau - \gamma )]\). Note that \(x(\tau _2 + \tau - \gamma ) = \gamma - 2r > \gamma _2\) (again, condition (C6)) and that \(x(\tau _2 + \tau - \gamma - r) = \gamma - r\).

From time \(t = \tau _2 + (\tau - \gamma )\) to time \(t = \tau _2 + \tau - 2r\), the delay is equal to \(1\) (and \(x'(t) = -1\)) as long as \(y(t) \ge \alpha \) (since \(x(t-r) \in [0,\gamma - r]\) for all such \(t\)). Since we have already shown that \(t - d > -\zeta + \alpha \) for \(t = \tau _2 + (\tau - \gamma )\), we certainly have that \(t - d > -\zeta + \alpha \) on this interval. Finally, since \(2\tau + \alpha - 1 < -\zeta + \ell \) by \((C8)\), we certainly have \(2\tau + 2\alpha /(m+1) - 1 < -\zeta + \ell \). This completes the proof. \(\square \)

The figure below shows a numerically approximated solution of the kind described in the above proposition, where the parameters \(r\), \(m\), \(\gamma _1\), \(\gamma _2\), \(\gamma \), \(\kappa \) are as in the proof of Lemma 3.2, and \(\alpha = 1/200\). The dashed horizontal lines about the \(t\)-axis are at heights \(\alpha \) and \(-\alpha \). The thicker line is \(x(t)\); the other line is \(y(t)\).

[Figure c: a numerically approximated solution; the thicker line is \(x(t)\), the thinner line is \(y(t)\), and the dashed horizontal lines are at heights \(\pm \alpha \).]
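A figure of this kind could, for instance, be produced by combining the hypothetical sketches from Sects. 2 and 3 (the names integrate, make_example, and the parameter variables below all refer to those sketches, not to anything in the paper). The initial condition used here is a clipped triangular wave with zeros at \(0\), \(-z_*\), and \(-2z_*\) and slope \(\pm 1\) near those zeros, so that it satisfies assumptions \((I)\).

```python
# Illustrative only: integrate Eq. (4) from a simple initial condition of the
# kind described by assumptions (I), using the earlier hypothetical sketches.
import numpy as np

f, g, d = make_example(alpha, r, g1, g2, gam, kap, m)

def x0_func(s, zs=z_star, level=0.15):
    # triangular wave with zeros at k * zs and slope +1 just to the left of 0,
    # clipped at +/- level (alpha <= level < zs/2 keeps slope +-1 near the zeros)
    q = s % (2 * zs)
    if q <= zs / 2:
        w = q
    elif q <= 3 * zs / 2:
        w = zs - q
    else:
        w = q - 2 * zs
    return float(np.clip(w, -level, level))

ts, xs = integrate(f, d, r, x0_func, t_end=10.0, dt=1e-3)

mask = ts >= 0
idx = np.where(np.diff(np.sign(xs[mask])) != 0)[0]   # indices of sign changes
print(np.diff(ts[mask][idx]))    # spacings between zeros; should settle near z_*
```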

Remark 3.4

Though we have not carried through the details in either case, we here sketch two ways that the existence of nontrivial slowly oscillating periodic solutions of Eq. (4) could be demonstrated. Let us write \(K_s \subset K\) for the set of initial conditions satisfying the following assumptions: \(x_0(s) \le -\alpha \) for all \(s \in [-1,-\alpha ]\), and \(x_0(s) = s\) for all \(s \in [-\alpha ,0]\). Given \(x_0 \in K_s\) with continuation \(x\) and writing \(z\) for the first positive zero of \(x\), we can make calculations similar to those in the above proposition to show that \(-x_z \in K_s\) as well. An application of Schauder’s theorem to the square of the map \(x_0 \mapsto -x_z\) on the compact, convex set \(K_s\) now shows that Eq. (4) has a nontrivial slowly oscillating periodic solution. Alternatively, since \(d(x(t),x(t-r)) = 1\) when \(|x(t)|\) is close to zero, if we assume that \(f\) is differentiable with \(f'(0) < -\pi /2\) the now-standard arguments in [12] show that \(0\) is an ejective fixed point of an appropriate Poincaré map, and that a nontrivial slowly oscillating periodic solution exists.

In the next section we show that there is a stable rapidly oscillating periodic solution \(p\) of Eq. 4. The initial condition \(p_0\) of this solution will satisfy the hypotheses of Proposition 3.3. The figure below shows the numerical approximation of a solution whose initial condition is near such an initial condition \(p_0\); note the apparent convergence to \(p\). A solution apparently converging to a slowly oscillating periodic solution is also shown. (For this figure, the parameters are again as in the proof of Lemma 3.2, and \(\alpha = 1/200\).)

[Figure d: numerically approximated solutions, one apparently converging to the rapidly oscillating periodic solution \(p\) and one apparently converging to a slowly oscillating periodic solution.]

The next figure shows the numerical approximation of the continuation of another initial condition near a segment of a stable rapidly oscillating solution \(p\) (for this figure the parameters are the same as in the last figure, except that \(m = 1.3\)). The thicker line shows the solution; the thinner line shows, for \(t \ge 0\), the value of \(y(t) = x(t - d(x(t),x(t-r)))\). The dashed lines are at heights \(\pm \alpha \). Let us write \(z_1 < z_2 < z_3 < \cdots \) for the positive zeros of this solution \(x\). Observe that \(-x_{z_1}\) does not satisfy assumptions \((I)\), and that accordingly \(|y(t)| < \alpha \) for multiple subintervals of \((z_1,z_2)\). The initial condition \(x_0\) is close enough to \(p_0\), however, that \(x_{z_4}\) apparently does satisfy assumptions \((I)\). As will become clear in the next section, if \(x_{z_4}\) satisfies assumptions \((I)\) and is close enough to \(p_0\), convergence of \(x\) to \(p\) is now assured.

[Figure e: the continuation of an initial condition near a segment of \(p\); the thicker line is \(x(t)\), the thinner line is \(y(t)\), and the dashed lines are at heights \(\pm \alpha \).]

4 The Stable Rapidly Oscillating Periodic Solution of Eq. (4)

In this section we use Proposition 3.3 to identify a particular rapidly oscillating periodic solution of Eq. (4), and prove its stability.

Let Eq. (4), \(\alpha \), \(r\), \(\gamma _1\), \(\gamma _2\), \(\gamma \), \(\kappa \), and \(m\) be as in the previous section; we assume that hypotheses (i–ix) hold. We recall the notation

$$\begin{aligned} z_* = \frac{2 + 2m\gamma }{5 + m}. \end{aligned}$$

Following Lemma 3.2, we henceforth view \(\alpha \), \(r\), \(\gamma _1\), \(\gamma _2\), \(\gamma \), \(\kappa \), and \(m\), together with some \(\epsilon _* \in (0,\alpha )\), as fixed such that conditions (C1)–(C8) are satisfied for all

$$\begin{aligned} \zeta \in \big (2z_* - \epsilon _*,2z_* + \epsilon _*\big ) \ \ \ \hbox {and for all} \ \ \ \ell \in \big (z_* - 2\alpha - \epsilon _*, z_* - 2\alpha + \epsilon _*\big ). \end{aligned}$$
(10)

We also assume that (6) holds, so that the interval \((-1,0)\) contains

$$\begin{aligned} \big [-2z_* - 2\alpha ,-2z_* + 2\alpha \big ] \ \ \hbox {and} \ \ \big [-z_* - 2\alpha ,-z_* + 2\alpha \big ] \end{aligned}$$

but the interval \([-1,0]\) is disjoint from \([-3z_* - 2\alpha ,-3z_* + 2\alpha ]\).

We now define some more particular subsets of initial conditions that satisfy assumptions \((I)\). For any \(\epsilon \in (0,\epsilon _*]\), we take \(\Omega _\epsilon \subset K\) to be the subset of all initial conditions \(x_0 \in K\) satisfying the following:

  • \(x_0(0) = 0\) and \(x_0(-1) \le -\alpha \);

  • \(|x_0(s)| < \alpha \) on precisely three subintervals of \([-1,0]\), and these subintervals are

    $$\begin{aligned} I_{-2} = \big (-u_{-1} - u_{-2}-\alpha ,-u_{-2}-u_{-1} +\alpha \big ), \\ I_{-1} = \big (-u_{-1}-\alpha ,-u_{-1}+\alpha \big ), \ \ I_0 = (-\alpha ,0] \end{aligned}$$

    (here \(u_{-1}\) and \(u_{-2}\) vary with \(x_0\));

  • \(x_0\) has constant slope \(1\) on \(I_0\) and \(I_{-2}\), and constant slope \(-1\) on \(I_{-1}\);

  • \(|u_{-2} - z_*| < \epsilon /2\) and \(|u_{-1} - z_*| < \epsilon /2\).

A picture of a typical member \(x_0\) of \(\Omega _\epsilon \) is below. The darkened intervals about \(-z_*\) and \(-2z_*\) have radius \(\epsilon \). Observe that the zeros of \(x_0\) are precisely \(-u_{-2} - u_{-1} < -u_{-1} < 0\). We emphasize that \(\Omega _\epsilon \) does not have diameter \(2\epsilon \); various members of \(\Omega _{\epsilon }\) are only guaranteed to be close to one another near \(0\), \(-z_*\), and \(-2z_*\).

[Figure f: a typical member \(x_0\) of \(\Omega _\epsilon \); the darkened intervals about \(-z_*\) and \(-2z_*\) indicate where its negative zeros may lie.]

Let us define the map \(Z : \Omega _{\epsilon } \rightarrow \mathbb {R}^2\) by \(Z(x_0) = (u_{-1},u_{-2})\). (We endow \(\mathbb {R}^2\) with the sup metric.) Since any member of \(\Omega _\epsilon \) has slope \(\pm 1\) around its negative zeros, we have the following.

Lemma 4.1

\(Z: \Omega _\epsilon \rightarrow \mathbb {R}^2\) is continuous and open.

Proof

Suppose that \(x_0\) and \(w_0\) are in \(\Omega _\epsilon \), with \(\Vert x_0 - w_0\Vert = \delta \le \epsilon \). Let \(-\zeta = -u_{-1} - u_{-2}\) and \(-\zeta '= -u'_{-1} - u'_{-2}\) be the most negative zeros of \(x_0\) and \(w_0\), respectively. Then

$$\begin{aligned} \big |w_0(-\zeta )\big | \le \delta \le \epsilon \le \epsilon _* < \alpha . \end{aligned}$$

Since \(w_0\) has slope \(\pm 1\) wherever \(|w_0(s)| < \alpha \), we have that \(| \zeta ' - \zeta | \le \delta \). A similar argument shows that \(|u_{-1} - u'_{-1}| \le \delta \). That \(Z\) is continuous (in fact, Lipschitz continuous with Lipschitz constant 1) follows. On the other hand, drawing a picture it is easy to see that an open ball of radius \(\delta \) in \(\Omega _\epsilon \) about \(x_0\), for all \(\delta \) sufficiently small, has image under \(Z\) that is an open ball of radius \(\delta \) in \(\mathbb {R}^2\) about \(Z(x_0)\). Thus \(Z\) is open too. \(\square \)

The most negative zero of \(x_0 \in \Omega _\epsilon \) is, again, \(-u_{-2} - u_{-1}\), and we have

$$\begin{aligned} \big |-u_{-2} - u_{-1} - 2z_*\big | < \epsilon \le \epsilon _*. \end{aligned}$$

Thus \(x_0\) satisfies the assumptions \((I)\) of the last section, with \(u_{-2} + u_{-1}\) in the role of \(\zeta \) and \(u_{-2} - 2\alpha \) in the role of \(\ell \) (note that with \(\zeta \) and \(\ell \) thus specified, the bounds in (10) are satisfied). We therefore have the following.

Lemma 4.2

If \(x_0 \in \Omega _\epsilon \) with \(\epsilon \le \epsilon _*\), then \(x_0\) satisfies assumptions \((I)\) of last section with \(\zeta = u_{-1} + u_{-2}\) and \(\ell = u_{-2} - 2\alpha \), and so Proposition 3.3 applies to \(x_0\). In particular, if we write \(x\) for the continuation of \(x_0\) as a solution of Eq. (4) and \(z\) for the first positive zero of \(x\), we have the following:

  1. (i)

    \(x|_{[0,z]}\) is completely and continuously determined by \(u_{-1} + u_{-2}\);

  2. (ii)

    \(x'(t) = 1\) on \((0,\alpha )\), \(x'(t) = -1\) on \((z - \alpha ,z)\), \(x(t) \ge \alpha \) on \([\alpha ,z - \alpha ]\), and \(x(z -1) \ge \alpha \);

  3. (iii)

    \(z\) is given by the formula

    $$\begin{aligned} z = \frac{2 + 2m\gamma }{1+m} - \frac{2}{1+m}(u_{-2} + u_{-1}); \end{aligned}$$
  4. (iv)

    The map \(\Omega _{\epsilon } \ni x_0 \mapsto x_z \in K\) is continuous.

Proof

The only point that is not a direct restatement of Proposition 3.3 is (iv); this follows from point (i), the Lipschitz continuity of \(x\), and the continuity of \(Z\). \(\square \)

Now observe that, in the situation described in the above lemma, the zeros of \(x_z\) are at \(0\), \(-z\), and \(-z - u_{-1}\). Let us define the following affine map \(M : \mathbb {R}^2 \rightarrow \mathbb {R}^2\):

$$\begin{aligned} M \left( \begin{array}{c} u_{-1} \\ u_{-2} \end{array} \right) = \left( \begin{array}{c} \frac{2 + 2m\gamma }{1+m} - \frac{2}{1+m}(u_{-2} + u_{-1}) \\ u_{-1} \end{array} \right) . \end{aligned}$$

Lemma 4.3

\((z_*,z_*)\) lies in \(Z(\Omega _{\epsilon })\) for all \(\epsilon \in (0,\epsilon _*)\), and is a globally attracting fixed point of \(M : \mathbb {R}^2 \rightarrow \mathbb {R}^2\).

Proof

By the definition of \(\Omega _{\epsilon }\) we certainly have that \((z_*,z_*)\) is in \(Z(\Omega _{\epsilon })\). Direct computation shows that \((z_*,z_*)\) is a fixed point of \(M\). The linear part of \(M\) is

$$\begin{aligned} \left( \begin{array}{cc} \frac{-2}{1+m} &{} \frac{-2}{1+m} \\ 1 &{} 0 \end{array} \right) . \end{aligned}$$

Since \(m > 1\), the eigenvalues of this matrix are distinct and strictly inside the unit circle; the lemma follows. \(\square \)
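As a numerical companion (a check, not a substitute for the computation), the following script evaluates the spectral radius \(\sqrt{2/(1+m)}\) of the linear part for the sample parameters of Lemma 3.2 and iterates \(M\) from a nearby point; the iterates approach \((z_*,z_*)\).

```python
# Check that the linear part of M is contracting for m > 1 and watch the
# iteration of M converge to (z_*, z_*); parameters as in Lemma 3.2.
import numpy as np

m, gam = 3/2, 1/6
z_star = (2 + 2 * m * gam) / (5 + m)

A = np.array([[-2 / (1 + m), -2 / (1 + m)],
              [1.0, 0.0]])                      # linear part of M
print(np.abs(np.linalg.eigvals(A)))             # both moduli equal sqrt(2/(1+m)) < 1

u = np.array([z_star + 0.02, z_star - 0.01])    # nearby zero spacings (u_{-1}, u_{-2})
for _ in range(50):
    u = np.array([(2 + 2 * m * gam) / (1 + m) - 2 / (1 + m) * u.sum(), u[0]])
print(u - z_star)                               # both components are now close to 0
```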

We continue to write \(x\) for the continuation of \(x_0\) as a solution of Eq. (4), and to write \(z\) for the first positive zero of \(x\). Since \((z_*,z_*)\) is a fixed point of \(M\) and \(M\) is continuous, if \(u_{-1}\) and \(u_{-2}\) are both close to \(z_*\) then \(z\) is also close to \(z_*\). In particular, we can take \((u_{-1},u_{-2})\) close enough to \((z_*,z_*)\) to guarantee that \(|z - z_*|< \epsilon _*/2\) and \(|u_{-1} - z_*| < \epsilon _*/2\). Moreover, as established in point (ii) of Lemma 4.2, we have that \(x(z-1) \ge \alpha \) (and so in particular \(z - 1\) lies between \(-u_{-2} - u_{-1} + \alpha \) and \(-u_{-1} - \alpha \)), that \(x'(t) = 1\) on \((0,\alpha )\), that \(x'(t) = -1\) on \((z-\alpha ,z)\), and that \(x(t) \ge \alpha \) on \([\alpha ,z - \alpha ]\). Therefore, \(|x_z(s)| < \alpha \) for \(s\) on precisely the three intervals

$$\begin{aligned} (-u_{-1} - z - \alpha ,-u_{-1} - z + \alpha ), \ \ (-z - \alpha ,-z + \alpha ), \ \ (-\alpha ,0], \end{aligned}$$

and on these intervals \(x_z\) has slope \(\pm 1\).

We have established the following.

Lemma 4.4

There is an \(\epsilon _0 \le \epsilon _*\) such that, if \(x_0 \in \Omega _{\epsilon _0}\), then \(-x_z \in \Omega _{\epsilon _*}\), and

$$\begin{aligned} Z\big (-x_z\big ) = M \big (Z(x_0)\big ). \end{aligned}$$

Repeating the above argument (shrinking \(\epsilon \) further if necessary) and appealing to the continuity of \(M\) and to Lemmas 3.1 and 4.2, we get the following proposition.

Proposition 4.5

There is an \(\epsilon _1 \in (0,\epsilon _0)\) such that, if \(x_0 \in \Omega _{\epsilon _1}\), then the following hold.

  1. (i)

    The first four positive zeros \(z_1 < z_2 < z_3 < z_4\) of \(x\) are defined, \(z_4 > 1\), and \((-1)^n x_{z_n} \in \Omega _{\epsilon _*}\) for all \(n \in \{1,2,3,4\}\);

  2. (ii)

    The map \(R: \Omega _{\epsilon _1} \rightarrow \Omega _{\epsilon _*}\) given by \(R(x_0) = x_{z_4}\) is continuous;

  3. (iii)

    \(Z(R(x_0)) = M^4(Z(x_0))\) for all \(x_0 \in \Omega _{\epsilon _1}\);

  4. (iv)

    If \(x_0\) and \(y_0\) in \(\Omega _{\epsilon _1}\) satisfy \(Z(x_0) = Z(y_0)\), then \(R(x_0) = R(y_0)\).

Proof

The only points that perhaps need amplification are the last part of (i) and point (iv). Since \(4z_* > 1\), taking \(\epsilon _1\) small enough ensures that \(z_4 > 1\). It is clear that the restriction of \(x\) to \([0,z_4]\) depends only (and continuously) on \(Z(x_0)\), and so (since \(z_4 > 1\)) \(R(x_0)\) depends only (and continuously) on \(Z(x_0)\). Point (iv) of the proposition follows. (It is in fact to guarantee this last part of the proposition that we define \(R\) as advancing solutions by four zeros, rather than the more natural-seeming two zeros). \(\square \)

Suppose that \(x_0 \in \Omega _{\epsilon _1}\) satisfies \(Z(x_0) = (z_*,z_*)\) (there certainly is such an \(x_0\)). Then, since \(Z(R(x_0)) = M^4(Z(x_0))\), we have that \(Z(R(x_0)) = (z_*,z_*)\) as well. In particular, \(R(x_0) \in \Omega _{\epsilon _1}\). Since \(R(x_0)\) is completely determined by \(Z(x_0)\) and \(R(R(x_0))\) is completely determined by \(Z(R(x_0)) = Z(x_0)\), we see that \(R(R(x_0)) = R(x_0)\)—that is, that \(p_0 := R(x_0)\) is a fixed point of \(R\). We have established the following.

Proposition 4.6

\(R\) has a fixed point \(p_0 \in \Omega _{\epsilon _1}\).

The continuation \(p\) of \(p_0\) as a solution of  Eq. (4) is a periodic solution with zeros at \(kz_*\), \(k \in \mathbb {Z}\), with period \(2z_*\), satisfying the symmetry

$$\begin{aligned} p(t + z_*) = -p(t) \quad \hbox { for all } \ t \in \mathbb {R}. \end{aligned}$$

We now complete the proof of Theorem 2.3. We proceed in a few steps.

Claim 1

There is a \(\bar{\gamma } \in (\alpha ,\gamma _1)\) and a \(\sigma > 0\) such that \(|p(t)| \le \bar{\gamma }\) implies that \(|p(t-1)| > \alpha + \sigma \).

Proof of Claim 1

The calculations of Proposition 3.3 show that \(|p(t-1)| \ge \alpha \) for all \(t \in [0,\gamma _1]\) (where \(p\) has constant slope \(1\)) and for all \(t \in [z_*-\gamma _1,z_*]\) (where \(p\) has constant slope \(-1\)). The periodicity and symmetry of \(p\) now imply that \(p\) has slope \(\pm 1\) whenever \(|p(t)| \le \gamma _1\), and that \(|p(t-1)| \ge \alpha \) for all such \(t\). Now choose \(\bar{\gamma } \in (\alpha , \gamma _1)\), and define

$$\begin{aligned} \nu = \min \{ \ |p(t-1)| \ : \ |p(t)| \le \bar{\gamma } \ \}. \end{aligned}$$

We know that \(\nu \ge \alpha \); if we imagine that \(\nu = \alpha \), then there is some \(\bar{t}\) such that \(|p(\bar{t})| \le \bar{\gamma }\) but \(|p(\bar{t} - 1)| = \alpha < \gamma _1\). Since \(p\) has slope \(\pm 1\) both near \(\bar{t}\) and \(\bar{t} - 1\), there are values of \(t\) near \(\bar{t}\) for which \(|p(t)| \le \gamma _1\) but \(|p(t-1)| < \alpha \)—a contradiction. It follows that \(\nu > \alpha \), and we may take any \(\sigma \in (0,\nu - \alpha )\). This proves the claim. \(\square \)

Claim 2

Given \(\epsilon _2 \in (0,\epsilon _1]\), there is a neighborhood \(U\) about \(p_0\) in \(K\) such that, given \(x_0 \in U\) with continuation \(x\) as a solution of (4), there is some \(t_0 > 0\) for which \(x_{t_0} \in \Omega _{\epsilon _2}\).

Proof of Claim 2

From Proposition 2.1 we have the following: given any \(\eta > 0\) and \(T > 0\), there is some \(\delta > 0\) such that \(x_0 \in K\) and \(\Vert x_0 - p_0\Vert \le \delta \) implies that \(\Vert x_t - p_t\Vert < \eta \) for all \(t \in [0,T]\). From Claim 1 we have that, whenever \(|p(t)| \le \bar{\gamma }\), then \(|p(t-1)| > \alpha + \sigma \). Suppose that \(T > 5z_*\) and that \(\eta \le \min (\alpha , \bar{\gamma } - \alpha , \sigma , \epsilon _2)\). Then whenever \(|x(t)| \le \alpha \) for \(t \in [0,T]\), we have that \(|p(t)| \le \bar{\gamma }\) and so \(|x(t-1)| > \alpha \) and \(x'(t) = \pm 1\) (recall that \(|x(t)| \le \alpha < \gamma _1\) implies that \(d(x(t),x(t-r)) = 1\)). In particular, as \(t\) moves across the open interval \((z_*-\eta ,4z_* + \eta )\), we have that \(|x(t)| < \alpha \) on four subintervals, that \(x'(t) = \pm 1\) on each of these subintervals, and that \(x\) has a unique zero on each of these subintervals, each such zero lying within \(\epsilon _2\) of a zero of \(p\). This tells us that there is an open neighborhood \(U\) about \(p_0\) in \(K\) such that solutions starting in \(U\) eventually flow into \(\Omega _{\epsilon _2}\). \(\square \)

Claim 3

There is an \(\epsilon _2 \in (0,\epsilon _1]\) such that, if \(x_0 \in \Omega _{\epsilon _2}\), \(R^n(x_0)\) is defined for all \(n \in \mathbb {N}\) and \(R^n(x_0) \rightarrow p_0\).

Proof of Claim 3

Note that if \(x_0 \in \Omega _{\epsilon _2}\), then \(|Z(x_0) - (z_*,z_*)| < \epsilon _2\). Since \((z_*,z_*)\) is a globally asymptotically attracting fixed point of \(M\), we can choose \(\epsilon _2\) such that \(|M^n(Z(x_0)) - (z_*,z_*)| < \epsilon _1/2\) for all \(n \in \mathbb {N}\). Suppose now that \(R^k(x_0)\) is defined and lies in \(\Omega _{\epsilon _1}\) for all \(k \in \{1,\ldots ,n\}\). Then \(R^{n+1}(x_0)\) is defined and lies in \(\Omega _{\epsilon _*}\) by Proposition 4.5. Repeated application of point (iii) of Proposition 4.5 shows that \(Z(R^{n+1}(x_0)) = M^{4(n+1)}(Z(x_0))\). Since \(M^{4(n+1)}(Z(x_0))\) is within \(\epsilon _1/2\) of \((z_*,z_*)\), \(R^{n+1}(x_0)\) actually lies in \(\Omega _{\epsilon _1}\). By induction we now have that \(R^n(x_0)\) is defined and lies in \(\Omega _{\epsilon _1}\) for all \(n\).

The continuity of \(R\) implies that, given \(\epsilon > 0\), there is a \(\delta > 0\) small enough that \(\Vert y_0 - p_0\Vert < \delta \) (and \(y_0 \in \Omega _{\epsilon _1}\)) implies that \(\Vert R(y_0) - p_0\Vert < \epsilon \). The global convergence of \(M\) implies that, given \(x_0 \in \Omega _{\epsilon _2}\), \(M^{4n}(Z(x_0))\) is within \(\delta \) of \((z_*,z_*) = Z(p_0)\) for all \(n\) sufficiently large. Now, we cannot conclude from this that \(R^n(x_0)\) is within \(\delta \) of \(p_0\), but there is an element \(\tilde{x}_0\) of \(\Omega _{\epsilon _1}\) with \(\Vert \tilde{x}_0 - p_0\Vert < \delta \) and \(Z(\tilde{x}_0) = Z(R^n(x_0))\). Since \(R(\tilde{x}_0) = R(R^n(x_0))\) by point (iv) of Proposition 4.5, we have that \(R^{n+1}(x_0)\) is within \(\epsilon \) of \(p_0\). The claim follows. \(\square \)

Strictly speaking, since \(R\) “advances solutions by four zeros,” combining the above three claims we have proven the following: there is a neighborhood \(U\) about \(p_0\) such that, given \(x_0 \in U\) with continuation \(x\) as a solution of Eq. (4), there is a sequence \(z_1 < z_2 < z_3 < \cdots \) of successive positive zeros of \(x\) such that \(x_{z_{4k}} \rightarrow p_0\). By the continuity of the solution semiflow and the periodicity of \(p\), though, it is now clear that \(|z_{k+1} - z_k| \rightarrow z_*\) and that we actually have \(x_{z_{2k}} \rightarrow p_0\); Theorem 2.3 is proven.