1 Introduction

Arguably, the negative Schwarzian derivative has been the most fortunate occurrence in the history of discrete dynamical systems.

Although named after Hermann Schwarz by Arthur Cayley, the Schwarzian derivative was discovered by Lagrange in his treatise “Sur la construction des cartes géographiques” (1781) and also appeared in an 1836 paper by Kummer (the examiner of Schwarz’s doctoral dissertation and also his father-in-law!) [31]. Its natural realm is complex geometry, more precisely one-dimensional complex manifolds. Indeed, Schwarz proved in [34] that the isomorphisms (biholomorphic maps) of the Riemann sphere \(\overline{\mathbb {C}}\), that is, Möbius transformations \(h(z)=\frac{az+b}{cz+d}, ad-bc\ne 0\), are characterized by the following property: their Schwarzian derivative identically vanishes.

Fig. 1 A map of the class S: the Shepherd function (1.6) with \(p=9, q=3\) and \(u=2\)

The Schwarzian derivative of h in z,

$$\begin{aligned} Sh(z)=\frac{h'''(z)}{h'(z)}-\frac{3}{2}\left( \frac{h''(z)}{h'(z)}\right) ^2, \end{aligned}$$

is well defined for all \(z\in \overline{\mathbb {C}}\) whenever h is a biholomorphic map, but of course it makes no sense to speak about the “sign” of Sh(z). If h is a \(C^3\) map defined on (a subinterval of) \(\mathbb {R}\), then we can still define Sh(x) (except if x is a critical point, that is, \(h'(x)=0\)), and it turns out that negative Schwarzian derivative is a very useful tool in real one-dimensional dynamics. This remarkable (and by no means obvious) discovery is usually attributed to Singer. In his 1978 paper [38] he proved two fundamental facts: (a) negative Schwarzian derivative is preserved by composition; and (b) if a diffeomorphism h has negative Schwarzian derivative, then \(|h'|\) satisfies the minimum principle. He used them to prove that maps with negative Schwarzian derivative share an important property with holomorphic maps in the Riemann sphere (already proved by Julia in 1918 [19]): some trivial cases excluded, the immediate basin of attraction of any attracting periodic orbit must contain a critical point. A simple consequence of this is that if h belongs to the class S below (see Fig. 1) and the fixed point u is locally attracting (which, for these maps, is equivalent to \(|h'(u)|\le 1\), as first noted by Sivak in [39]), then u is a global attractor of the dynamical system
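Facts (a) and (b) lend themselves to symbolic verification. The following sketch is ours, not part of any cited source; it assumes sympy is available and uses two arbitrarily chosen test maps. It checks that Möbius transformations have identically vanishing Schwarzian derivative, and also the classical composition rule \(S(f\circ g)=((Sf)\circ g)\cdot (g')^2+Sg\), which underlies (a):

```python
import sympy as sp

z, a, b, c, d = sp.symbols('z a b c d')

def schwarzian(h):
    """Sh = h'''/h' - (3/2)(h''/h')^2, as an expression in z."""
    h1, h2, h3 = (sp.diff(h, z, n) for n in (1, 2, 3))
    return h3/h1 - sp.Rational(3, 2)*(h2/h1)**2

# Moebius maps are exactly the maps with vanishing Schwarzian derivative
mobius = (a*z + b)/(c*z + d)
assert sp.simplify(schwarzian(mobius)) == 0

# Composition rule S(f o g) = ((Sf) o g)*(g')^2 + Sg, checked for two
# placeholder maps (any C^3 expressions would do):
f = z**3 + z
g = z**2 + 1
lhs = schwarzian(f.subs(z, g))
rhs = schwarzian(f).subs(z, g)*sp.diff(g, z)**2 + schwarzian(g)
assert sp.simplify(lhs - rhs) == 0
```

The composition rule makes (a) immediate: if \(Sf<0\) and \(Sg<0\), then both summands on the right-hand side are negative wherever they are defined.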

$$\begin{aligned} x_{n+1}=h(x_n),\quad n\ge 0,\quad x_0\in I. \end{aligned}$$
(1.1)

Throughout the paper, \(I\) will always denote a subinterval of \(\mathbb {R}\). Note that \(I\) need not be either bounded or closed; in fact \(I=(0,\infty )\) is often used in relevant models.
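The global attraction phenomenon behind (1.1) is easy to see numerically. The map in the following minimal sketch, \(h(x)=xe^{1-x}\) on \(I=(0,\infty )\), is our own choice: it has fixed point \(u=1\) with \(h'(u)=0\), and \(h(x)>x\) for \(x<1\), \(h(x)<x\) for \(x>1\):

```python
import math

# h(x) = x*e^(1-x) on I = (0, inf): fixed point u = 1 with h'(u) = 0,
# so u is locally (in fact globally) attracting for x_{n+1} = h(x_n).
h = lambda x: x*math.exp(1.0 - x)

x = 0.3                      # x_0 in I
for _ in range(100):
    x = h(x)                 # Eq. (1.1)
assert abs(x - 1.0) < 1e-12  # the orbit has converged to u = 1
```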

Definition 1

We say that \(h:I\rightarrow I\) belongs to the class S if it has the following properties:

  1. (S1)

    h is a \(C^3\) map and \(h'\) vanishes at most at one point c (which is a relative extremum of h);

  2. (S2)

    there is \(u\in I\) such that \(h(x)>x\) (respectively, \(h(x)<x\)) for any \(x<u\) (respectively, \(x>u\));

  3. (S3)

    \(Sh(x)<0\) for any \(x\in I\) (except possibly at c).
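Conditions (S1)–(S3) can be verified symbolically for concrete maps. The following sympy sketch is our own check for the Shepherd function of Fig. 1 (with \(p=9, q=3\), so that \(c=(1/2)^{1/3}\) and \(u=2\)); the sample points are arbitrary:

```python
import sympy as sp

x = sp.symbols('x', positive=True)
h = 9*x/(1 + x**3)                  # Shepherd function (1.6), p = 9, q = 3

h1 = sp.diff(h, x)
Sh = sp.simplify(sp.diff(h, x, 3)/h1
                 - sp.Rational(3, 2)*(sp.diff(h, x, 2)/h1)**2)

# (S1): h' vanishes only at c = (1/2)^(1/3)
c = sp.Rational(1, 2)**sp.Rational(1, 3)
assert sp.simplify(h1.subs(x, c)) == 0
# (S2): u = 2 is the fixed point, h > id on (0, u) and h < id on (u, oo)
assert h.subs(x, 2) == 2
assert all(h.subs(x, t) > t for t in [sp.Rational(1, 2), 1, sp.Rational(3, 2)])
assert all(h.subs(x, t) < t for t in [3, 5, 10])
# (S3): Sh < 0 at sample points on both sides of c
assert all(Sh.subs(x, t) < 0 for t in [sp.Rational(1, 4), 1, 2, 5])
```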

Incidentally, Allwright arrived at pretty much the same conclusions as Singer in [1]; only, instead of the minimum principle, he used: (c) maps with negative Schwarzian derivative expand the cross ratio. Curiously enough, Allwright submitted and published his paper (in the same journal!) some months earlier than Singer. Probably the reason why Allwright’s paper tends to be overlooked dates back to a very influential article published by Guckenheimer the next year [14], where only Singer is given explicit credit for the above-mentioned results (to further confuse the issue, [1] is included in the list of references in [14], but it is never referred to).

This small blemish aside, Guckenheimer’s paper is very fine indeed. Just using (a) and (b) (and a lot of ingenuity) he was able to prove that maps belonging to the class S (one must additionally assume that I is compact and \(h''(c)<0\)) do not possess wandering intervals. As a consequence, he showed as well that if h has an attracting periodic orbit, then it attracts the orbits of almost all (in the sense of Lebesgue measure) points of I, thus giving rise to a brand new branch of one-dimensional dynamics (measure-theoretic dynamics of smooth interval maps) which has undergone an impressive development in the last thirty years.

With hindsight, all three properties (a), (b) and (c) are pretty natural. As we said before, Möbius maps (and their iterates, being Möbius maps as well) are characterized by having zero Schwarzian derivative and, as it turns out, by preserving the cross ratio. Moreover, all holomorphic maps (and hence their derivatives) are well known to satisfy the maximum modulus principle and, when mapping the open unit disk into itself, they contract the hyperbolic metric, which is directly related to the cross ratio. These reversals, “minimum-maximum” and “expands-contracts”, are far from casual, and a new breakthrough was made by van Strien [42] by realizing that, under appropriate disjointness assumptions inspired by a relatively old paper by A. Schwartz [33] (not to be confused with Hermann Schwarz!), the behavior of diffeomorphic inverse branches of iterates of smooth maps (not necessarily having negative Schwarzian derivative) is quite similar to that of univalent holomorphic maps. Now, distortion for these latter maps is governed by the classical Koebe principle which, once adequately translated to this setting, was later used to demonstrate the absence of wandering intervals for general smooth unimodal maps [9]. In a sense, the circle was closed by Kozlovski by proving a totally unexpected fact: first entry maps to sufficiently small neighbourhoods of the critical point of a smooth unimodal map always have negative Schwarzian derivative [20]. In the meantime new powerful topological tools, first implemented in [6], made it possible to gradually extend these ideas to the multimodal realm, culminating in the impressive classification theorem of metric attractors proved in [43].

We have seen:

Theorem 1

(Allwright-Singer) If h belongs to the class S and \(|h'(u)|\le 1\), that is, u is a local attractor of (1.1), then u is a global attractor of (1.1).

Needless to say, the question whether local attraction (or L.A.S., abbreviating “local asymptotic stability”, as it is often termed) may imply global attraction (or G.A.S., from “global asymptotic stability”) has been one of paramount importance both in discrete and continuous dynamics since Poincaré’s times. An example of some relevance in the ensuing discussion is Wright’s famous delay-differential equation

$$\begin{aligned} y'(t)= -py(t-1)(1+y(t)),\quad p>0, \end{aligned}$$
(1.2)

whose study was initially motivated by the even more famous prime number theorem ([25] is a nice survey on the subject, see also [30]). It can be shown that L.A.S. for the zero constant solution amounts here to \(p<\frac{\pi }{2}\). In his celebrated paper [46], Wright proved G.A.S. for \(p\le \frac{3}{2}\) and conjectured (this is still an open problem; the best available result is [2]) that G.A.S. indeed holds whenever \(p<\frac{\pi }{2}\), that is, that L.A.S. implies G.A.S. for Eq. (1.2).

After the change of variable \(x(t)=-\log (1+y(t))\), (1.2) becomes

$$\begin{aligned} x'(t)=p(e^{-x(t-1)}-1), \end{aligned}$$

which in turn arises as a particular case of the delay-differential equation

$$\begin{aligned} x'(t)=h(x(t-\tau )) \end{aligned}$$
(1.3)

by writing

$$\begin{aligned} h(x)=p(e^{-x}-1) \end{aligned}$$

and taking \(\tau =1\) as the delay time. Note that Eq. (1.3) can be seen as the limiting case of another one prominently featured in the literature,

$$\begin{aligned} x'(t)=-\delta x(t)+h(x(t-\tau )),\quad \delta >0. \end{aligned}$$
(1.4)

The Nicholson blowflies equation [15] and the Mackey–Glass equation [28] are two examples of (1.4) of particular note; there, respectively,

$$\begin{aligned} h(x)=p x e^{-q x}, \quad p,q>0 \end{aligned}$$
(1.5)

(the so-called Ricker function), and

$$\begin{aligned} h(x)=\frac{p x}{1+x^q}, \quad p,q>0 \end{aligned}$$
(1.6)

(the so-called Shepherd function) are used. Maybe inspired by Wright’s conjecture, Smith posed in [40, p. 116] the question whether L.A.S. implies G.A.S. for (1.4) when h is the map (1.5).

Equation (1.4) can be equivalently written as

$$\begin{aligned} x'(t)=-\delta x(t)+\delta h(x(t-\tau )),\quad \delta >0, \end{aligned}$$
(1.7)

just using \(\frac{h}{\delta }\) instead of h (but we rename “\(\frac{h}{\delta }\)” as “h” to keep notation simple). It is convenient to do this because then constant solutions of (1.7) and the discrete system (1.1) are exactly the same (note also that if 0 is a fixed point of h, then the constant zero map is a solution of (1.3)). Moreover, classical stability theory for delay equations (see for instance [41]) can be used to show that L.A.S. for (1.1) implies L.A.S. both for (1.3) and (1.7). (We are being a bit loose here; in particular, \(h'(0)<0\) is also needed to get L.A.S. for (1.3).) This elicits the natural, and very interesting, question of whether the same can be said, under appropriate assumptions for \(h:I\rightarrow I\), concerning global attraction, that is, whether the relatively simple dynamics of the one-dimensional system (1.1) may “globally dominate”, so to say, those of much more complicated, in principle infinite-dimensional, dynamical systems like (1.3) or (1.7).

Table 1 Some relevant maps belonging to the class S

In the present context this was first realized, as far as we know, by Fisher in 1984 [13]. True, Fisher simplifies things a bit by working with a discretized (hence finite-dimensional) version of (1.7). Observe that after applying Euler’s method for a step size s so that \(\frac{\tau }{s}=k\) is a positive integer, and writing \(\alpha =1-\delta s\), (1.7) becomes

$$\begin{aligned} x_{n+1}=\alpha x_n + (1-\alpha )h(x_{n-k}),\quad n\ge 0, \quad (x_0,x_{-1},\ldots ,x_{-k})\in I^{k+1} \end{aligned}$$
(1.8)

(in order to ensure the well-definedness of (1.8) we will always assume \(\alpha \in (0,1)\)). Observe that fixed points of (1.1) become equilibria of (1.8); Fisher proceeds to show that G.A.S. for (1.1) implies G.A.S. for (1.8). Only the continuity of the map h is needed to prove this fact, which is remarkable because when dealing with local stability we usually resort to linearization techniques, so we implicitly assume at least some smoothness near the fixed point. Soon thereafter, a similar result was obtained for (1.7) in [37, p. 244] and [29]; see also [25] for a proof in the context of Eq. (1.3) (in this last case some additional hypotheses on h are required, but we need not delve into the details).
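Fisher’s observation is easy to explore numerically. A minimal sketch (our own; the parameter values are arbitrary choices) iterating Clark’s equation (1.8) for the Ricker map (1.5) with \(p=2, q=1\), whose fixed point \(u=\ln 2\) satisfies \(h'(u)=1-\ln 2\in (0,1)\) and is a global attractor of (1.1):

```python
import math

def clark_orbit(h, alpha, k, init, n):
    """Iterate Clark's equation x_{n+1} = alpha*x_n + (1-alpha)*h(x_{n-k})."""
    xs = list(init)                     # x_{-k}, ..., x_0  (k+1 values)
    for _ in range(n):
        xs.append(alpha*xs[-1] + (1.0 - alpha)*h(xs[-1 - k]))
    return xs

# Ricker map (1.5) with p = 2, q = 1: fixed point u = ln 2
h = lambda x: 2.0*x*math.exp(-x)
orbit = clark_orbit(h, alpha=0.5, k=2, init=[0.2, 1.7, 0.9], n=400)
assert abs(orbit[-1] - math.log(2.0)) < 1e-10
```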

Apart from being the discretization of (1.7), Eq. (1.8) is of independent interest. It is often referred to in the literature as Clark’s equation because it was first studied by Clark in a 1976 paper [8]. Yet it had appeared earlier (1963) in a model by Allen of whale populations; subsequently, in 1980, Beddington and May suggested that the International Whaling Commission use a slight modification of

$$\begin{aligned} \textstyle {h(x)=x\left( 1+p\left( 1-\left( \frac{x}{z}\right) ^q\right) \right) ,} \quad p,q,z>0, \end{aligned}$$
(1.9)

in the particular case of the baleen whale [5]. A recent and very exhaustive survey on (1.8) is [23]. Let us add, to complete the picture, that L.A.S. for (1.1) also implies L.A.S. for (1.8) [21], and that Győri and Trofimchuk restated Smith’s problem in this setting by conjecturing that L.A.S. implies G.A.S. for (1.8) when h is the Ricker function (1.5) [17].

Let us summarize: we have that L.A.S. (respectively, G.A.S.) for (1.1) implies L.A.S. (respectively, G.A.S.) for the equations (1.3), (1.7) and (1.8). On the other hand, as we have explained, L.A.S. implies G.A.S. for (1.1) when h belongs to the class S, and the same thing probably happens to (1.3), (1.7) and (1.8) for some concrete maps h such as those previously mentioned. Remarkably enough, all of them belong to the class S, see Table 1. Indeed, as shown in the pioneering papers [16, 17, 24], negative Schwarzian derivative can be used, to great effect, to get some partial results on global attraction for these systems also when the fixed point is unstable for the map h. The following result from [17] (which can be improved as shown in the same paper and also in [44]) clearly illustrates the idea. First, it is proved that G.A.S. for \(x_{n+1}=H(x_n)\), with \(H(x)=\alpha ^{k+1} u+(1-\alpha ^{k+1})h(x)\), implies G.A.S. for (1.8). But, as is easy to check, if h belongs to the class S, then H belongs to the class S as well. Therefore, according to the Allwright-Singer theorem, if

$$\begin{aligned} |h'(u)|\le \frac{1}{1-\alpha ^{k+1}}, \end{aligned}$$
(1.10)

then we get G.A.S. for (1.8).
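Condition (1.10) is straightforward to check and to simulate. A sketch (with our own choice of parameters) for the Ricker map (1.5) with \(p=e^{5/2}, q=1\), so that \(u=5/2\) and \(r=h'(u)=-3/2\):

```python
import math

p = math.exp(2.5)                # Ricker (1.5), q = 1: u = 2.5, r = 1 - u = -1.5
h = lambda x: p*x*math.exp(-x)
u, r, k, alpha = 2.5, -1.5, 1, 0.6

# sufficient condition (1.10) for G.A.S.: |h'(u)| <= 1/(1 - alpha^(k+1))
assert abs(r) <= 1.0/(1.0 - alpha**(k + 1))

xs = [0.4, 3.7]                  # initial conditions x_{-1}, x_0
for _ in range(500):
    xs.append(alpha*xs[-1] + (1.0 - alpha)*h(xs[-1 - k]))
assert abs(xs[-1] - u) < 1e-9    # the orbit of (1.8) converges to u
```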

As explained at the beginning of this introduction, the usefulness of the Schwarzian derivative is so great as to be called unreasonable. Thus, in view of the previous discussion, the conjecture “L.A.S. + negative Schwarzian derivative (modulo some reasonable monotonicity assumptions for h) implies G.A.S.” arises most naturally in the above-mentioned settings. It was first explicitly formulated in a series of papers written (although not published) more or less simultaneously [12, 24–27]. In the case of Clark’s equation, the only one we are interested in in this paper, additional numerical evidence was provided in [23, 45]. The main aim of this paper is to disprove this conjecture for \(k\ge 3\).

2 Basic Notions and Statements of the Main Results

Let us state precisely some notions used in the previous section. We say that a point u is the global attractor of the equation

$$\begin{aligned} x_{n+1}=g(x_{n},\ldots ,x_{n-k}), \quad n\ge 0, \quad (x_0,\ldots ,x_{-k})\in I^{k+1}, \end{aligned}$$
(2.1)

where \(g:I^{k+1}\rightarrow I\) is a continuous map, if all orbits \((x_n)_{n=-k}^\infty \) of (2.1) converge to u. It is simple to verify that if u is a global attractor, then it is an equilibrium of (2.1), that is \(u=g(u,\ldots ,u)\). We emphasize that in the case of Clark’s Eq. (1.8), u is an equilibrium if and only if it is a fixed point of h. We say that the equilibrium u is a local attractor of (2.1) if orbits with initial conditions \((x_0,\ldots ,x_{-k})\) close enough to u converge to u. We say that u is stable for (2.1) if for any \(\varepsilon >0\) there is \(\delta >0\) such that \(|x_n-u|<\delta \) for any \(n\in \{-k,\ldots ,0\}\) implies \(|x_n-u|<\varepsilon \) for all n. If u is not stable then it is called unstable. Global (respectively, local) stable attractors are called globally (respectively, locally) asymptotically stable, or, shortly, G.A.S. (respectively, L.A.S.). It is worth mentioning that global attractors are always stable in dimension one (\(k=0\)), see for instance [35], but this need not be the case if \(k\ge 1\) [36].

A weak version of global attraction is permanence. We say that (2.1) is permanent if there is a compact interval \(J\subset I\) such that all orbits eventually fall into J, that is, for every orbit \((x_n)\) there is a number \(n_0\) (depending on the orbit) such that \(x_n\in J\) for any \(n\ge n_0\). If \(I\ne \mathbb {R}\), then Clark’s equation is permanent (see, e.g., [10]).

L.A.S. for Clark’s equation will be analyzed in full detail in the next section; presently we recall the basics of it. Let u be a fixed point of h and write \(r=h'(u)\). After linearizing at u, and according to the well-known Hartman–Grobman theorem, we get that u is a local stable (respectively, unstable) attractor of (1.8) if all roots of the characteristic polynomial

$$\begin{aligned} \lambda ^{k+1}-\alpha \lambda ^k -(1-\alpha )r \end{aligned}$$
(2.2)

have modulus less than 1 (respectively, some of its roots have modulus greater than 1). We concentrate on the case \(|r|>1\) because, as we said in the previous section, in typical cases \(|r|\le 1\) even implies G.A.S. for (1.8). If \(r>1\), then it is immediate to check that (2.2) has a positive root larger than 1, which implies instability. Thus the interesting case is \(r<-1\), the only one we will consider in this paper. It turns out that there is a number \(a_k(r), 0<a_k(r)<1\), such that all roots of (2.2) have modulus less than 1 if and only if \(\alpha >a_k(r)\). The curve \(\alpha =a_k(r)\) is strictly decreasing, with \(a_k(r)\rightarrow 1\) as \(r\rightarrow -\infty \) and \(a_k(r)\rightarrow 0\) as \(r\rightarrow -1\). It can be written, in parametric form, as

$$\begin{aligned} r=\frac{\sin (\theta )}{\sin ((k+1)\theta )-\sin (k\theta )},\quad \alpha =\frac{\sin ((k+1)\theta )}{\sin (k\theta )}, \quad \textstyle {\theta \in \left( \frac{\pi }{2k+1}, \frac{\pi }{k+1}\right) }. \end{aligned}$$

If \(k=1\) this amounts to \(a_1(r)=1+\frac{1}{r}\), that is, we have local attraction whenever \(\alpha >1+\frac{1}{r}\). It is worth comparing this to (1.10), which guarantees global attraction under the stronger hypothesis \(\alpha ^2>1+\frac{1}{r}\) (when h belongs to the class S).
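Since the parametrization gives \(a_k(r)\) only implicitly, in practice one recovers it numerically. The following sketch (ours; the function name and parameter values are our choices) solves \(r=r_k(\theta )\) by bisection, checks the closed form \(a_1(r)=1+\frac{1}{r}\), and verifies that \(e^{i\theta }\) is a root of (2.2) at the boundary:

```python
import cmath
import math

def stability_boundary(k, r, tol=1e-12):
    """Solve r = sin(theta)/(sin((k+1)theta) - sin(k*theta)) by bisection
    (r_k is strictly increasing on the interval); return (theta, a_k(r))."""
    f = lambda t: math.sin(t)/(math.sin((k + 1)*t) - math.sin(k*t))
    lo = math.pi/(2*k + 1) + 1e-9    # r_k -> -infinity here
    hi = math.pi/(k + 1) - 1e-9      # r_k -> -1 here
    while hi - lo > tol:
        mid = 0.5*(lo + hi)
        if f(mid) < r:
            lo = mid
        else:
            hi = mid
    t = 0.5*(lo + hi)
    return t, math.sin((k + 1)*t)/math.sin(k*t)

# k = 1: closed form a_1(r) = 1 + 1/r, so a_1(-2) = 1/2
_, a1 = stability_boundary(1, -2.0)
assert abs(a1 - 0.5) < 1e-8
# at the boundary, e^{i*theta} is a root of (2.2): here k = 3, r = -2
t, a = stability_boundary(3, -2.0)
lam = cmath.exp(1j*t)
assert abs(lam**4 - a*lam**3 - (1 - a)*(-2.0)) < 1e-8
```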

We are ready to state precisely the conjecture we are going to investigate in this paper:

Assume that h belongs to the class S and \(r=h'(u)<-1\). Then u is a global (stable) attractor of (1.8) if and only if \(\alpha \ge a_k(r)\). As a consequence, L.A.S. implies G.A.S. for (1.8) whenever h belongs to the class S.

We can get additional insight on this conjecture by understanding what happens near the bifurcation parameter \(a_k(r)\). If \(\alpha =a_k(r)\), then (2.2) has two complex (conjugate) roots with modulus 1 while all other roots have modulus less than 1. In such a case, under appropriate smoothness assumptions for h (it suffices to assume, and so we will always do in the sequel, that h is \(C^4\) near u) a so-called Neimark–Sacker bifurcation occurs under generic conditions. Two alternatives are then possible when \(\alpha \) is close enough to \(a_k(r)\): either there is an invariant curve whenever \(\alpha <a_k(r)\) (the supercritical case) or there is an invariant curve whenever \(\alpha >a_k(r)\) (the subcritical case). Here, by an invariant curve, we mean a set \(C\subset I^{k+1}\) homeomorphic to the circle with the property that if the initial vector \((x_0,\ldots , x_{-k})\) of an orbit \((x_n)\) belongs to C, then all vectors \((x_n,\ldots , x_{n-k})\) belong to C as well. Subcritical Neimark–Sacker bifurcations are of special interest: then, for appropriately chosen values of the parameter \(\alpha \), a locally attracting equilibrium of (1.8) coexists with an invariant curve, so it cannot be globally attracting.

Our first theorem shows that, surprisingly enough, negative Schwarzian derivative at u is closely related to the nature of the Neimark–Sacker bifurcation at \(a_k(r)\). In fact, when \(Sh(u)<0\), or equivalently,

$$\begin{aligned} \varSigma h(u):= \frac{h'''(u)h'(u)}{(h''(u))^2}<\frac{3}{2} \end{aligned}$$

(if \(h''(u)=0\), then we mean \(\varSigma h(u)=\infty >\frac{3}{2}, \varSigma h(u)=\frac{3}{2}\) or \(\varSigma h(u)=-\infty <\frac{3}{2}\) according to, respectively, \(h'''(u)<0, h'''(u)=0\) or \(h'''(u)>0\)), subcritical bifurcations are “almost” ruled out:

Theorem 2

Assume that one of the following conditions is satisfied:

  1. (a)

    \(k\le 2\) and \(\varSigma h(u)\le \frac{3}{2}\).

  2. (b)

    \(\varSigma h(u)\le \frac{2-r}{1-r}\).

  3. (c)

    \(\varSigma h(u)\le 1.49\).

  4. (d)

    \(r\le -1.18\) and \(\varSigma h(u)\le \frac{3}{2}\).

Then the equilibrium u of (1.8) exhibits a supercritical Neimark–Sacker bifurcation at \(\alpha =a_k(r)\).

Remark 1

Numerical estimations suggest that \(1.4928\ldots \) and \(-1.17483\ldots \) are, respectively, the best bounds in (c) and (d).

Thus, under the hypothesis of negative Schwarzian derivative of h at u, we get in particular that the bifurcation is always supercritical if \(k=1\) or \(k=2\), and the same thing happens, regardless of the value of k, if r is not too close to \(-1\). For instance, as shown in Sect. 6, the bifurcation is supercritical for all maps from Table 1.
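As a concrete instance of Theorem 2(b), \(\varSigma h(u)\) can be computed symbolically. The following sketch is our own choice: the Ricker map (1.5) with \(p=e^3, q=1\), so that \(u=3\) and \(r=-2\); here the bifurcation is supercritical:

```python
import sympy as sp

x = sp.symbols('x')
h = sp.exp(3)*x*sp.exp(-x)          # Ricker map (1.5) with p = e^3, q = 1
u = 3                               # fixed point: h(3) = 3
h1, h2, h3 = (sp.diff(h, x, n).subs(x, u) for n in (1, 2, 3))

r = sp.simplify(h1)
Sigma = sp.simplify(h3*h1/h2**2)    # Sigma h(u) = h'''(u) h'(u) / h''(u)^2

assert r == -2                      # so (2 - r)/(1 - r) = 4/3
assert Sigma == 0                   # 0 <= 4/3: Theorem 2(b) applies
```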

The following theorem is our key result. It shows that, against all odds, negative Schwarzian derivative and subcritical bifurcation may coexist:

Theorem 3

Let \(h_\varepsilon :I\rightarrow I, 0<\varepsilon <\varepsilon _0\), be a family of maps with corresponding fixed points \(u_\varepsilon \), and locally \(C^4\) at these points. Assume that the following conditions are satisfied:

  1. (i)

    The map \(D(\varepsilon )=h'_{\varepsilon }(u_{\varepsilon })\) is differentiable and \(\displaystyle {\lim _{\varepsilon \rightarrow 0}}D(\varepsilon )=-1, \displaystyle {\lim _{\varepsilon \rightarrow 0}}D'(\varepsilon )<0\);

  2. (ii)

    The map \(T(\varepsilon )=\varSigma h_{\varepsilon }(u_{\varepsilon })\) is differentiable and \(\displaystyle {\lim _{\varepsilon \rightarrow 0}}T(\varepsilon )=3/2, \displaystyle {\lim _{\varepsilon \rightarrow 0}}T'(\varepsilon )=0\).

Then, if \(k\ge 3, \varepsilon >0\) is small enough and we put \(h=h_\varepsilon , u=u_\varepsilon \) and \(r=h'(u)\), the equilibrium u of (1.8) exhibits a subcritical Neimark–Sacker bifurcation at \(\alpha =a_k(r)\). In particular, if \(\alpha >a_k(r)\) is close enough to \(a_k(r)\), then u is a local, but not global, attractor of (1.8).

A concrete family of rational decreasing bounded maps belonging to the class S and satisfying the hypotheses of Theorem 3 is given in Sect. 6.

If k is large enough, then hypothesis (i) in Theorem 3 can be slightly weakened:

Theorem 4

Let \(h_\varepsilon :I\rightarrow I, 0<\varepsilon <\varepsilon _0\), be a family of maps with corresponding fixed points \(u_\varepsilon \), and locally \(C^4\) at these points. Assume that the following conditions are satisfied:

  1. (i)

    The map \(D(\varepsilon )=h'_{\varepsilon }(u_{\varepsilon })\) is differentiable, and \(\displaystyle {\lim _{\varepsilon \rightarrow 0}}D(\varepsilon )=-1\) (with \(D'(\varepsilon )<0\) for any \(0<\varepsilon <\varepsilon _0\)), \(\displaystyle {\lim _{\varepsilon \rightarrow 0}}D'(\varepsilon )=0\);

  2. (ii)

    The map \(T(\varepsilon )=\varSigma h_{\varepsilon }(u_{\varepsilon })\) is differentiable and \(\displaystyle {\lim _{\varepsilon \rightarrow 0}}T(\varepsilon )=3/2, \lim _{\varepsilon \rightarrow 0}T'(\varepsilon )=0\);

  3. (iii)

    \(\displaystyle {\lim _{\varepsilon \rightarrow 0}}T'(\varepsilon )/D'(\varepsilon )<1/4\).

Then there is \(k_0\) such that if \(k\ge k_0\), \(\varepsilon >0\) is small enough and we put \(h=h_\varepsilon , u=u_\varepsilon \) and \(r=h'(u)\), the equilibrium u of (1.8) exhibits a subcritical Neimark–Sacker bifurcation at \(\alpha =a_k(r)\). In particular, if \(\alpha >a_k(r)\) is close enough to \(a_k(r)\), then u is a local, but not global, attractor of (1.8).

Theorems 3 and 4 follow from a more general result characterizing supercritical and subcritical Neimark–Sacker bifurcations in Clark’s equation. We delay its somewhat complicated statement to the end of Sect. 3 (Theorem 7). In the simplest case \(k=1\) we have:

Theorem 5

Assume \(k=1\). If \(\varSigma h(u)< \frac{1-2r}{1-r}\) (respectively, \(\varSigma h(u)>\frac{1-2r}{1-r}\)), then the equilibrium u of (1.8) exhibits a supercritical (respectively, subcritical) Neimark–Sacker bifurcation at \(\alpha =a_1(r)=1+\frac{1}{r}\).

Note that, since \(r<-1\), this result substantially refines Theorem 2(b) when \(k=1\).

While Theorem 2 is just local in nature, one is tempted to formulate, in view of results as that mentioned in (1.10), the following weaker version of the conjecture:

Assume that h belongs to the class \(S, r=h'(u)<-1\) and the Neimark–Sacker bifurcation at \(a_k(r)\) is supercritical. Then u is a global attractor of (1.8) if and only if \(\alpha \ge a_k(r)\). In particular, if \(k=1\) or \(k=2\), then u is a global attractor of (1.8) if and only if \(\alpha \ge a_k(r)\).

In fact, by combining a Neimark–Sacker bifurcation technique with rigorous numerics, a recent paper [3] proves it (for \(k=1\)) in the particular case \(h(x)=-p\tanh (x)\); here \(I=\mathbb {R}\) and \(u=0\). (See also [4] for a similar result concerning the so-called 2-dimensional Ricker map.) Our numerical explorations also point in this direction; for instance, the counterexample maps (6.1) from Sect. 6 seem to feature global attraction in the cases \(k=1\) and 2. The paper [45] is worth a mention here, because it is devoted to the study of the Neimark–Sacker bifurcation for (1.8) in the particular case of the Ricker function (1.5). Its main theorem (Theorem 3 there) is wrong because the coefficient \(g_{21}\) is not correctly calculated (compare to [22, p. 187]), but a numerical example is given showing that, as we prove rigorously in this paper, the bifurcation is supercritical in this case.

As said before, we expect the conjecture under discussion to be true in the cases \(k=1\) and \(k=2\). Yet there seems to exist an interesting difference (first noted by Eduardo Liz) between these two values of k as far as metric attractors are concerned. These sets were briefly mentioned at the beginning of Sect. 1: a metric attractor is a compact subset of \(I^{k+1}\) containing the limit sets of the (vector) orbits of a positive Lebesgue measure set of points, and having no strict subset with the same properties. It is well known that if a map h belongs to the class S, then it has at most one metric attractor (see, e.g., [7]). Curiously enough, numerical experiments suggest that if a decreasing map h belongs to the class S and \(k=1\), then (1.8) has exactly one metric attractor, a (vectorial) periodic orbit or an invariant curve, attracting the orbits of almost all vectors \((x_0,x_{-1})\) of initial conditions. This need not happen if \(k=2\): an example of such a map with two metric attractors can be found in [11].

Remark 2

The main results of this paper were announced, without proof, by the first author in [18]. Unfortunately, one hypothesis is missing from the statements of Theorems 3 and 4 there and, as a consequence, it is wrongly stated that the Neimark–Sacker bifurcation is subcritical for some parameter values of the Shepherd function. The counterexample (6.1) and the subsequent discussion in Sect. 6 can also be found in [18]; since that paper may not be generally available, they are reproduced here with just some cosmetic modifications.

3 The Neimark–Sacker Bifurcation for Clark’s Equation

In this section we analyze the local stability of Clark’s equation (1.8) near a fixed point u of h (and hence an equilibrium of (1.8)) in terms of the parameter \(\alpha \). Recall that we assume that h is sufficiently smooth near u (\(C^4\) is enough) and \(h'(u)<-1\).

Our starting point is the lemma below.

Lemma 1

Let k be a positive integer,

$$\begin{aligned} (\beta _k(\theta ),\alpha _k(\theta ))= \textstyle {\left( \frac{\sin \theta }{\sin (k\theta )}, \frac{\sin ((k+1)\theta )}{\sin (k\theta )}\right) }, \qquad \qquad \theta \in \left( \frac{\pi }{2k+1}, \frac{\pi }{k+1}\right) , \end{aligned}$$

and consider the equation

$$\begin{aligned} \lambda ^{k+1}-\alpha \lambda ^k+\beta =0,\quad 0<\alpha <1,\;\beta >0. \end{aligned}$$
(3.1)

Then all roots of (3.1) have modulus less than 1 (respectively, some root has modulus greater than 1) if and only if \(\alpha =\alpha _k(\theta )\) and \(\beta <\beta _k(\theta )\) (respectively, \(\beta >\beta _k(\theta )\)) for some \(\theta \). Moreover, if \(\alpha =\alpha _k(\theta )\) and \(\beta =\beta _k(\theta )\) for some \(\theta \), then (3.1) has exactly two simple (conjugate) complex roots of modulus 1, those given by \(e^{i\theta }\) and \(e^{-i\theta }\), while all other roots have modulus less than 1.

Remark 3

We emphasize that, when \(\theta \) moves from \(\frac{\pi }{2k+1}\) to \(\frac{\pi }{k+1}\), \(\beta _k(\theta )\) strictly increases from \(\frac{\sin (\frac{\pi }{2k+1})}{\sin (\frac{k\pi }{2k+1})}\) to 1 (except for \(\beta _1(\theta )\equiv 1\)) and \(\alpha _k(\theta )\) strictly decreases from 1 to 0. To check this, observe first that

$$\begin{aligned} \beta _k'(\theta ) =\frac{(k+1)\sin ((k-1)\theta )-(k-1)\sin ((k+1)\theta )}{2\sin ^2(k\theta )} \end{aligned}$$

and then note that the numerator has positive derivative, so it is positive as well. The opposite thing happens to

$$\begin{aligned} \alpha _k'(\theta ) =\frac{\sin ((2k+1)\theta )-(2k+1)\sin \theta }{2\sin ^2(k\theta )}. \end{aligned}$$

Proof (Lemma 1)

The first statement is proved in [21], see [32] for a simpler proof. The second statement is also implicitly shown in these papers; for the convenience of the reader we give here the short proof.

To begin with, observe that (3.1) has no multiple roots of modulus 1. The reason is that, in such a case, the polynomial \(p(\lambda )=\lambda ^{k+1}-\alpha \lambda ^k+\beta \) and its derivative \(p'(\lambda )=(k+1)\lambda ^k-k\alpha \lambda ^{k-1}\) would have a common root of modulus 1. This is not possible because \(0<\alpha <1\) and the only roots of \(p'(\lambda )\) are 0 and \(\frac{\alpha k}{k+1}\). Note also that 1 is not a root of (3.1) due to \(0<\alpha <1\) and \(\beta >0\).

Now fix \(\beta =\beta _k(\theta )\) and \(\alpha =\alpha _k(\theta )\) for some \(\theta \in (\frac{\pi }{2k+1}, \frac{\pi }{k+1})\). Then \(-1\) is not a root of (3.1) either, because this would imply \(|\beta |=1+\alpha \), which is impossible because \(0<\beta <1\). On the other hand, it is very easy to check directly that \(e^{i\theta }\) and \(e^{-i\theta }\) are roots of (3.1). Thus it only remains to show that there are no \(0<\theta _1<\theta _2<\pi \) with both \(e^{i\theta _1}\) and \(e^{i\theta _2}\) (and their conjugates) being roots of (3.1). But in such a case we get

$$\begin{aligned} \beta =|e^{i\theta _1}-\alpha |=|e^{-i\theta _1}-\alpha | =|e^{i\theta _2}-\alpha |=|e^{-i\theta _2}-\alpha |, \end{aligned}$$

hence the circles \(|z|=1\) and \(|z-\alpha |=\beta \) intersect in at least four points. This means that both circles coincide, that is, \(\alpha =0, \beta =1\), a contradiction. \(\square \)
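Lemma 1 can be double-checked numerically by computing the roots of (3.1) along the curve \((\beta _k(\theta ),\alpha _k(\theta ))\). A sketch (ours; it assumes numpy, with arbitrarily chosen k and \(\theta \)):

```python
import math
import numpy as np

k = 4
# a point in the admissible interval (pi/(2k+1), pi/(k+1))
theta = 0.5*(math.pi/(2*k + 1) + math.pi/(k + 1))
alpha = math.sin((k + 1)*theta)/math.sin(k*theta)
beta  = math.sin(theta)/math.sin(k*theta)

# roots of lambda^(k+1) - alpha*lambda^k + beta, Eq. (3.1)
coeffs = [1.0, -alpha] + [0.0]*(k - 1) + [beta]
mods = sorted(abs(z) for z in np.roots(coeffs))

# exactly two (conjugate) roots of modulus 1; all others strictly inside
assert abs(mods[-1] - 1.0) < 1e-9 and abs(mods[-2] - 1.0) < 1e-9
assert mods[-3] < 0.999
```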

Let

$$\begin{aligned} r_k(\theta )=\frac{\sin \theta }{\sin ((k+1)\theta )-\sin (k\theta )} =\frac{\cos \left( \frac{\theta }{2}\right) }{\cos \left( \frac{(2k+1)\theta }{2}\right) }, \quad \theta \in \textstyle {\left( \frac{\pi }{2k+1}, \frac{\pi }{k+1}\right) }, \end{aligned}$$

and observe that \(r_k(\theta )\) strictly increases from \(-\infty \) to \(-1\) as \(\theta \) moves from \(\frac{\pi }{2k+1}\) to \(\frac{\pi }{k+1}\). Thus we can solve \(r=r_k(\theta )\) for \(\theta \) as \(\theta =\theta _k(r)\) and write \(a_k(r)=\alpha _k(\theta _k(r))\). Note also that

$$\begin{aligned} \beta _k(\theta )=-(1-\alpha _k(\theta ))r_k(\theta ). \end{aligned}$$

We fix \(r=h'(u)\) in what follows. Since \(r_k(\theta )\) is strictly increasing, the line \(\beta =-(1-\alpha )r\) intersects the curve \((\beta _k(\theta ),\alpha _k(\theta ))\) in the plane \((\beta ,\alpha )\) at exactly one point, namely the one given by the \(\theta \) such that \(r=r_k(\theta )\). (Alternatively, this follows as well from the fact that the curve, when seen as the graph of a map \(\alpha =f(\beta )\), is convex. The proof of convexity is rather cumbersome; we will not use this property in the sequel, so we omit it.) Since the characteristic equation of (1.8) at u is precisely (3.1) for \(\beta =-(1-\alpha )r\), Lemma 1 implies that u is locally asymptotically stable if \(\alpha >a_k(r)\) and unstable if \(\alpha <a_k(r)\), the bifurcation arising at \(\alpha =a_k(r)\) being, as anticipated in the previous section, of the Neimark–Sacker type. More precisely, if \(e^{i\theta }\) is not a third or a fourth root of unity (which is an immediate consequence of Lemma 1), then what happens to (1.8) in the vicinity of \(\alpha _k(\theta )=a_k(r)\) depends on two numbers \(b_k(\theta ), d_k(\theta )\), to be defined below, and is described by the following theorem:

Theorem 6

Assume that \(b_k(\theta )<0\) and \(\varepsilon >0\) is small enough. Then we have:

(i) The supercritical case: \(\underline{d_k(\theta )<0}\). If \(\alpha _k(\theta )-\varepsilon <\alpha <\alpha _k(\theta )\), then there is an invariant (attracting) curve near u; if \(\alpha _k(\theta )\le \alpha <\alpha _k(\theta )+\varepsilon \), then there is no invariant curve near u.

(ii) The subcritical case: \(\underline{d_k(\theta )>0}\). If \(\alpha _k(\theta )-\varepsilon <\alpha \le \alpha _k(\theta )\), then there is no invariant curve near u; if \(\alpha _k(\theta )<\alpha <\alpha _k(\theta )+\varepsilon \), then there is an (unstable) invariant curve near u.

“Near u” means, of course, near the vector \((u,\ldots ,u)\) in \(\mathbb {R}^{k+1}\).

It can also be proved, although it will not be of use in this paper, that at the bifurcation point \(\alpha =\alpha _k(\theta ), u\) is stable and locally attracting if we are in the supercritical case, and unstable if we are in the subcritical case. For a detailed account of the Neimark–Sacker bifurcation, including a proof of Theorem 6, we refer the reader to [22, pp. 185–187].

We devote the rest of this section to clarifying the nature of the Neimark–Sacker bifurcation at the parameter value \(\alpha _k(\theta )\), where \(\theta \) is the angle satisfying \(r=r_k(\theta )\). To simplify the notation, we write in what follows \(\alpha =\alpha _k(\theta ), \beta =\beta _k(\theta )\) and \(\mu _0=e^{i\theta }\).

To define the number \(b_k(\theta )\) we proceed as follows. First we consider the equation (again in the variable \(\lambda \), and now depending only on the parameter c, since the number r has been fixed)

$$\begin{aligned} \lambda ^{k+1}-c \lambda ^k-(1-c)r=0. \end{aligned}$$
(3.2)

A number \(\rho e^{i\sigma }\) (\(\rho >0, \sigma \in \mathbb {R}\)) is a root of (3.2) if and only if

$$\begin{aligned} F(\rho ,\sigma ,c):= & {} \rho ^{k+1}\cos ((k+1)\sigma ) - c \rho ^k\cos (k\sigma )-(1-c)r = 0,\\ G(\rho ,\sigma ,c):= & {} \rho ^{k+1}\sin ((k+1)\sigma ) - c \rho ^k\sin (k\sigma ) = 0. \end{aligned}$$

Since \(\beta =-(1-\alpha )r\), we have

$$\begin{aligned} F(1,\theta ,\alpha )= & {} 0,\\ G(1,\theta ,\alpha )= & {} 0, \end{aligned}$$

by Lemma 1. Moreover, a direct calculation gives

$$\begin{aligned} F_\rho (1,\theta ,\alpha )= & {} (k+1)\cos ((k+1)\theta )-k\alpha \cos (k\theta ),\\ F_\sigma (1,\theta ,\alpha )= & {} -(k+1)\sin ((k+1)\theta )+k\alpha \sin (k\theta ),\\ G_\rho (1,\theta ,\alpha )= & {} (k+1)\sin ((k+1)\theta )-k\alpha \sin (k\theta ),\\ G_\sigma (1,\theta ,\alpha )= & {} (k+1)\cos ((k+1)\theta )-k\alpha \cos (k\theta ), \end{aligned}$$

and then

$$\begin{aligned} \begin{vmatrix} F_\rho (1,\theta ,\alpha )&F_\sigma (1,\theta ,\alpha ) \\ G_\rho (1,\theta ,\alpha )&G_\sigma (1,\theta ,\alpha ) \end{vmatrix}= & {} 1+2k+k^2\left( 1+\alpha ^2\right) -2k(1+k)\alpha \cos \theta \\> & {} 1+2k+k^2\left( 1+\alpha ^2\right) -2k(1+k)\alpha \\= & {} (k(1-\alpha )+1)^2\\> & {} 0. \end{aligned}$$

Therefore, for any c close enough to \(\alpha \) there is a uniquely defined number

$$\begin{aligned} \mu (c)=\rho (c) e^{i\sigma (c)}, \end{aligned}$$

with \(\rho (c)\) close to 1 and \(\sigma (c)\) close to \(\theta \), such that \(\mu (c)\) is a root of (3.2), with both \(\rho (c)\) and \(\sigma (c)\) being differentiable maps. Now, by definition,

$$\begin{aligned} b_k(\theta ):=\rho '(\alpha ). \end{aligned}$$

Recall that in Theorem 6 the assumption \(b_k(\theta )<0\) is made. This is precisely what happens here:

Lemma 2

We have \(b_k(\theta )<0\).

Proof

Observe that

$$\begin{aligned} F_c(1,\theta ,\alpha )= & {} -\cos (k\theta )+r,\\ G_c(1,\theta ,\alpha )= & {} -\sin (k\theta ). \end{aligned}$$

Then

$$\begin{aligned} \begin{vmatrix} -F_c(1,\theta ,\alpha )&F_\sigma (1,\theta ,\alpha ) \\ -G_c(1,\theta ,\alpha )&G_\sigma (1,\theta ,\alpha ) \end{vmatrix}= & {} -k\alpha +(k+1)\cos \theta +kr\alpha \cos (k\theta )-(k+1)r\cos ((k+1)\theta )\\= & {} -k\alpha +(k+1)\cos \theta +\frac{k\alpha \sin \theta \cos (k\theta )}{\sin ((k+1)\theta )-\sin (k\theta )}\\&-\frac{(k+1)\sin \theta \cos ((k+1)\theta )}{\sin ((k+1)\theta )-\sin (k\theta )}\\= & {} k\alpha \ \frac{\sin \theta \cos (k\theta )-\sin ((k+1)\theta )+\sin (k\theta )}{\sin ((k+1)\theta )-\sin (k\theta )}\\&+(k+1)\frac{\cos \theta \sin ((k+1)\theta )-\cos \theta \sin (k\theta )-\sin \theta \cos ((k+1)\theta )}{\sin ((k+1)\theta )-\sin (k\theta )}\\= & {} k\alpha \ \frac{\sin \theta \cos (k\theta )-\sin (k\theta )\cos \theta -\sin \theta \cos (k\theta )+\sin (k\theta )}{\sin ((k+1)\theta )-\sin (k\theta )}\\&+(k+1)\frac{\sin (k\theta )-\cos \theta \sin (k\theta )}{\sin ((k+1)\theta )-\sin (k\theta )}\\= & {} k\alpha \ \frac{\sin (k\theta )(1-\cos \theta )}{\sin ((k+1)\theta )-\sin (k\theta )}+(k+1)\frac{\sin (k\theta )(1-\cos \theta )}{\sin ((k+1)\theta )-\sin (k\theta )}\\= & {} \frac{(k\alpha +k+1)\sin (k\theta )(1-\cos \theta )}{\sin ((k+1)\theta )-\sin (k\theta )}\\< & {} 0, \end{aligned}$$

because \(\theta \in (\frac{\pi }{2k+1}, \frac{\pi }{k+1})\). Therefore,

$$\begin{aligned} \rho '(\alpha )=\frac{\begin{vmatrix} -F_c(1,\theta ,\alpha )&F_\sigma (1,\theta ,\alpha ) \\ -G_c(1,\theta ,\alpha )&G_\sigma (1,\theta ,\alpha ) \end{vmatrix}}{ \begin{vmatrix} F_\rho (1,\theta ,\alpha )&F_\sigma (1,\theta ,\alpha ) \\ G_\rho (1,\theta ,\alpha )&G_\sigma (1,\theta ,\alpha ) \end{vmatrix}}<0 \end{aligned}$$

as we desired to show. \(\square \)
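Lemma 2 can also be corroborated numerically: solving (3.2) for c slightly below and slightly above \(\alpha \) and tracking the modulus of the root near \(e^{i\theta }\) should show \(\rho (c)\) decreasing through 1. The sketch below does this with Newton's method; the values of k and \(\theta \) are arbitrary test choices, and the closed form for \(\alpha \) is obtained by solving \(G(1,\theta ,c)=0\) for c:

```python
import cmath, math

def tracked_root(k, r, c, start):
    # Newton's method on (3.2): p(z) = z^(k+1) - c*z^k - (1-c)*r,
    # started near the root being tracked
    z = start
    for _ in range(60):
        p = z ** (k + 1) - c * z ** k - (1 - c) * r
        dp = (k + 1) * z ** k - k * c * z ** (k - 1)
        z = z - p / dp
    return z

k, theta = 3, 0.55                      # arbitrary theta in (pi/7, pi/4)
r = math.cos(theta / 2) / math.cos((2 * k + 1) * theta / 2)
alpha = math.sin((k + 1) * theta) / math.sin(k * theta)  # solves G(1, theta, c) = 0
mu0 = cmath.exp(1j * theta)

# at c = alpha the unimodular number e^{i*theta} is indeed a root of (3.2)
assert abs(mu0 ** (k + 1) - alpha * mu0 ** k - (1 - alpha) * r) < 1e-12

delta = 1e-4
rho_minus = abs(tracked_root(k, r, alpha - delta, mu0))
rho_plus = abs(tracked_root(k, r, alpha + delta, mu0))
# rho(c) crosses 1 decreasingly at c = alpha, consistent with b_k(theta) < 0
assert rho_minus > 1 > rho_plus
```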

Introducing the number \(d_k(\theta )\), and especially elucidating its sign, is more complicated. Note that we will use bold type to denote vectors in \(\mathbb {C}^{k+1}\), but not their components; that is, \(\mathbf {p}=(p_0,\ldots ,p_k), \mathbf {q}=(q_0,\ldots ,q_k)\), and so on.

We begin by writing (1.8) in vectorial form as

$$\begin{aligned} \mathbf {z}_{n+1}=G(\mathbf {z}_n), \end{aligned}$$
(3.3)

in the sense that \((\mathbf {z}_n)_{n=0}^\infty \) is an orbit of (3.3) if and only if \((x_n)_{n=-k}^\infty \) is an orbit of (1.8), with \(\mathbf {z}_n=(x_n,\ldots , x_{n-k})\), by taking

$$\begin{aligned} G(v_{0},v_1,\ldots ,v_{k})=(\alpha v_{0}+(1-\alpha )h(v_{k}),v_{0},\ldots ,v_{k-1}). \end{aligned}$$

Let A be the Jacobian matrix of G at the fixed point \(\mathbf {u}=(u,\ldots ,u)\) of (3.3). The characteristic equations of (1.8) at u and (3.3) at \(\mathbf u \) coincide, hence two of the eigenvalues of A are \(\mu _0\) and its conjugate \(\overline{\mu _0}\). Of course they are also eigenvalues of the transposed matrix \(A^t\) of A. Let \(\mathbf {p}\in \mathbb {C}^{k+1}\) be an eigenvector of \(A^t\) with eigenvalue \(\overline{\mu _0}\), and let \(\mathbf {q}\in \mathbb {C}^{k+1}\) be an eigenvector of A with eigenvalue \(\mu _0\). Since these eigenvalues have multiplicity 1, it is possible to show (using the Fredholm alternative theorem and the Jordan decomposition of matrices; we skip the details) that \(\langle \mathbf {p},\mathbf {q}\rangle \ne 0\), where \(\langle \mathbf {p},\mathbf {q}\rangle \) denotes the standard scalar product in \(\mathbb {C}^{k+1}\), that is,

$$\begin{aligned} \langle \mathbf {p},\mathbf {q}\rangle = \overline{p_0}q_0+\cdots +\overline{p_k}q_k. \end{aligned}$$

Then we can assume, without loss of generality, that \(\langle \mathbf {p},\mathbf {q}\rangle =1\).

Let also \(B:\mathbb {C}^{k+1}\times \mathbb {C}^{k+1}\rightarrow \mathbb {C}^{k+1}\) and \(C:\mathbb {C}^{k+1}\times \mathbb {C}^{k+1}\times \mathbb {C}^{k+1}\rightarrow \mathbb {C}^{k+1}\) be the vectorial polynomial maps given by

$$\begin{aligned} B(\mathbf {x},\mathbf {y})=\sum _{0\le j,l\le k}\left. \frac{\partial ^{2}G(\mathbf {v})}{\partial v_{j}\partial v_{l}}\right| _{\mathbf {v}=\mathbf {u}} x_{j}y_{l}, \end{aligned}$$

and

$$\begin{aligned} C(\mathbf {x},\mathbf {y}, \mathbf {z})=\sum _{0\le j,l,m\le k}\left. \frac{\partial ^{3}G(\mathbf {v})}{\partial v_{j}\partial v_{l}\partial v_{m}}\right| _{\mathbf {v}=\mathbf {u}} x_{j}y_{l}z_m, \end{aligned}$$

and let \(I_{k+1}\) denote the \((k+1)\times (k+1)\) identity matrix (that having ones in the main diagonal and zeros elsewhere). Finally, let \(\mathrm{Re}(z)\) denote the real part of a complex number z. Then \(d_k(\theta )\) is given by

$$\begin{aligned} d_k(\theta ):= & {} \frac{1}{2}\mathrm{Re}\{\overline{\mu _{0}}(\langle \mathbf {p},C(\mathbf {q},\mathbf {q},\overline{\mathbf {q}})\rangle + 2\langle \mathbf {p},B(\mathbf {q},B(\mathbf {q},\overline{\mathbf {q}})(I_{k+1}-A^t)^{-1})\rangle \\&\quad +\langle \mathbf {p},B(\overline{\mathbf {q}},B(\mathbf {q},\mathbf {q})(\mu _{0}^{2}I_{k+1}-A^t)^{-1})\rangle )\}. \end{aligned}$$

Note that, due to Lemma 1, neither 1 nor \(\mu _0^2\) is an eigenvalue of A, hence the above inverse matrices make sense. Strictly speaking, \(d_k(\theta )\) is ambiguously defined, as the vectors \(\mathbf {p}\) and \(\mathbf {q}\) can be chosen in a variety of ways, but it can be checked that its sign (the only information we really need about \(d_k(\theta )\)) is always the same.

Let us now calculate \(d_k(\theta )\). We have

$$\begin{aligned} A=\begin{pmatrix} \alpha &{}\quad 0 &{}\quad \cdots &{}\quad 0 &{}\quad -\beta \\ 1 &{}\quad 0 &{}\quad \cdots &{}\quad 0 &{}\quad 0 \\ 0 &{}\quad 1 &{}\quad \cdots &{}\quad 0 &{}\quad 0 \\ 0 &{}\quad 0 &{}\quad \cdots &{}\quad 0 &{}\quad 0 \\ 0 &{}\quad 0 &{}\quad \cdots &{}\quad 1 &{}\quad 0 \end{pmatrix}, \end{aligned}$$

so eigenvectors \(\mathbf {v}\) of A with eigenvalue \(\mu _{0}\) are given by the equality

$$\begin{aligned} (\alpha v_{0}-\beta v_{k},v_{0}, v_{1},\ldots , v_{k-1})=(\mu _{0} v_{0},\mu _{0}v_{1},\ldots ,\mu _{0}v_{k}). \end{aligned}$$

Since the characteristic polynomial of A is \(\lambda ^{k+1}-\alpha \lambda ^k+\beta \), one such vector is

$$\begin{aligned} \mathbf {v}=(1,\overline{\mu _{0}},\ldots ,\overline{\mu _{0}}^k). \end{aligned}$$

Similarly, eigenvectors \(\mathbf {p}\) for \(A^{t}\) with eigenvalue \(\overline{\mu _{0}}\) are given by

$$\begin{aligned} (\alpha p_{0}+p_{1},p_{2},\ldots ,p_{k},-\beta p_{0})=(\overline{\mu _{0}}p_{0}, \overline{\mu _{0}}p_{1},\ldots ,\overline{\mu _{0}}p_{k}). \end{aligned}$$

Taking again \(p_{0}=1\), we get

$$\begin{aligned} \mathbf {p}=(1,-\beta \mu _{0}^{k},-\beta \mu _{0}^{k-1},\ldots ,-\beta \mu _{0}). \end{aligned}$$

Then we have

$$\begin{aligned} \langle \mathbf {p},\mathbf {v}\rangle= & {} \overline{p_{0}}v_{0}+\overline{p_{1}}v_{1}+\cdots +\overline{p_{k}}v_{k}\\= & {} 1-\beta \,\overline{\mu _{0}}^{k}\,\overline{\mu _{0}} -\beta \,\overline{\mu _{0}}^{k-1}\,\overline{\mu _{0}}^{2}-\cdots -\beta \,\overline{\mu _{0}}\,\overline{\mu _{0}}^{k}\\= & {} 1-k\beta \,\overline{\mu _{0}}^{k+1}. \end{aligned}$$

We emphasize that, as expected, \(1-k\beta \,\overline{\mu _{0}}^{k+1}\ne 0\), because \(\theta \in (\frac{\pi }{2k+1}, \frac{\pi }{k+1})\) and then \((k+1)\theta \in (\frac{\pi }{2}, \pi )\), hence \(\overline{\mu _{0}}^{k+1}\notin \mathbb {R}\). Therefore

$$\begin{aligned} \mathbf {q}=\frac{1}{1-k\beta \,\overline{\mu _{0}}^{k+1}}\mathbf {v}= \left( \frac{1}{1-k\beta \,\overline{\mu _{0}}^{k+1}},\frac{\overline{\mu _{0}}}{1-k\beta \,\overline{\mu _{0}}^{k+1}}, \ldots ,\frac{\overline{\mu _{0}}^{k}}{1-k\beta \,\overline{\mu _{0}}^{k+1}}\right) \end{aligned}$$

is a well-defined eigenvector of A with eigenvalue \(\mu _0\) and \(\langle \mathbf {p},\mathbf {q}\rangle =1\).
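The eigenvectors \(\mathbf {p}\) and \(\mathbf {q}\) just obtained can be verified numerically. In the sketch below (k and \(\theta \) are arbitrary test choices, and \(\alpha , \beta \) are computed from the closed forms that make \(e^{i\theta }\) a root of the characteristic polynomial), the actions of A and \(A^t\) are applied componentwise using the companion-type structure of A:

```python
import cmath, math

k, theta = 2, 1.0                       # arbitrary theta in (pi/5, pi/3)
mu0 = cmath.exp(1j * theta)
mu0c = mu0.conjugate()
alpha = math.sin((k + 1) * theta) / math.sin(k * theta)
beta = -(1 - alpha) * math.cos(theta / 2) / math.cos((2 * k + 1) * theta / 2)

v = [mu0c ** j for j in range(k + 1)]
p = [1 + 0j] + [-beta * mu0 ** (k + 1 - j) for j in range(1, k + 1)]
q = [vj / (1 - k * beta * mu0c ** (k + 1)) for vj in v]

def apply_A(z):
    # action of A: first row is (alpha, 0, ..., 0, -beta), the rest is a shift
    return [alpha * z[0] - beta * z[k]] + z[:-1]

def apply_At(z):
    # action of the transpose A^t, as in the displayed equation for p
    return [alpha * z[0] + z[1]] + z[2:] + [-beta * z[0]]

assert all(abs(a - mu0 * b) < 1e-12 for a, b in zip(apply_A(q), q))    # A q = mu0 q
assert all(abs(a - mu0c * b) < 1e-12 for a, b in zip(apply_At(p), p))  # A^t p = conj(mu0) p
inner = sum(pj.conjugate() * qj for pj, qj in zip(p, q))
assert abs(inner - 1) < 1e-12                                          # <p, q> = 1
```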

The maps \(B(\mathbf {x},\mathbf {y})\) and \(C(\mathbf {x},\mathbf {y},\mathbf {z})\) are easy to calculate, their components being

$$\begin{aligned} B_{i}(\mathbf {x},\mathbf {y})= & {} {\left\{ \begin{array}{ll} (1-\alpha )sx_{k}y_{k}&{} \text {if }i=0, \\ 0 &{} \text {if }i=1,2,\ldots ,k, \end{array}\right. }\\ C_{i}(\mathbf {x},\mathbf {y}, \mathbf {z})= & {} {\left\{ \begin{array}{ll} (1-\alpha )tx_{k}y_{k}z_{k}&{} \text {if }i=0, \\ 0 &{} \text {if }i=1,2,\ldots ,k, \end{array}\right. } \end{aligned}$$

with \(s=h''(u)\) and \(t=h'''(u)\). Then

$$\begin{aligned} C(\mathbf {q},\mathbf {q},\overline{\mathbf {q}})= \left( \frac{(1-\alpha )t\,\overline{\mu _{0}}^{k}}{\left( 1-k\beta \, \overline{\mu _{0}}^{k+1}\right) ^{2}\left( 1-k\beta \mu _{0}^{k+1}\right) }, 0,\ldots ,0\right) \end{aligned}$$

and

$$\begin{aligned} \begin{aligned}&\left\langle \mathbf {p},C(\mathbf {q},\mathbf {q},\overline{\mathbf {q}})\right\rangle \\&=\frac{(1-\alpha )t\,\overline{\mu _{0}}^{k}}{\left( 1-k\beta \,\overline{\mu _{0}}^{k+1}\right) ^{2} \left( 1-k\beta \mu _{0}^{k+1}\right) } =\frac{(1-\alpha )t\,\overline{\mu _{0}}^{k}}{\gamma \left( 1-k\beta \,\overline{\mu _{0}}^{k+1}\right) } =\frac{(1-\alpha )t}{\gamma \left( \mu _{0}^{k}-k\beta \,\overline{\mu _{0}}\right) }; \end{aligned} \end{aligned}$$
(3.4)

where

$$\begin{aligned} \gamma =(1-k\beta \,\overline{\mu _{0}}^{k+1}) \left( 1-k\beta \mu _{0}^{k+1}\right) =|1-k\beta \mu _{0}^{k+1}|^{2}. \end{aligned}$$

We next calculate \(\langle \mathbf {p},B(\mathbf {q},B(\mathbf {q}, \overline{\mathbf {q}})(I_{k+1}-A^t)^{-1})\rangle \). On the one hand,

$$\begin{aligned} B(\mathbf {q},\overline{\mathbf {q}})=((1-\alpha )s/\gamma ,0,\ldots ,0). \end{aligned}$$

If, on the other hand, we write \(\mathbf {x}=B(\mathbf {q},\overline{\mathbf {q}})(I_{k+1}-A^t)^{-1}\), or, equivalently, \((I_{k+1}-A)\mathbf {x}^t=B(\mathbf {q},\overline{\mathbf {q}})^t\), and recall that

$$\begin{aligned} I_{k+1}-A=\begin{pmatrix} 1-\alpha &{}0 &{}\cdots &{}0 &{}\beta \\ -1 &{}1 &{}\cdots &{}0 &{}0 \\ 0 &{}-1 &{}\cdots &{}0 &{}0 \\ 0 &{}0 &{}\cdots &{}1 &{}0 \\ 0 &{}0 &{}\cdots &{}-1 &{}1 \end{pmatrix}, \end{aligned}$$

we see that \(\mathbf {x}\) can be calculated by solving the system

$$\begin{aligned} (1-\alpha )x_{0}+\beta x_{k}= & {} (1-\alpha )s/\gamma \\ -x_{0}+x_{1}= & {} 0 \\ -x_{1}+x_{2}= & {} 0\\&\vdots&\\ -x_{k-1}+x_{k}= & {} 0 \end{aligned}$$

From the last k equations we deduce \(x_{0}=x_{1}=\cdots =x_{k}\), which together with the first one implies

$$\begin{aligned} x_{0}=\frac{(1-\alpha )s}{\gamma (1-\alpha +\beta )}. \end{aligned}$$

Therefore

$$\begin{aligned} B(\mathbf {q},\overline{\mathbf {q}})\left( I_{k+1}-A^t\right) ^{-1}= & {} \left( \frac{(1-\alpha )s}{\gamma (1-\alpha +\beta )}, \frac{(1-\alpha )s}{\gamma (1-\alpha +\beta )},\ldots ,\frac{(1-\alpha )s}{\gamma (1-\alpha +\beta )}\right) ,\\ B\left( \mathbf {q},B(\mathbf {q},\overline{\mathbf {q}})\left( I_{k+1}-A^t\right) ^{-1}\right)= & {} \left( \frac{(1-\alpha )^{2}s^{2}\,\overline{\mu _{0}}^{k}}{\gamma (1-\alpha +\beta ) \left( 1-k\beta \,\overline{\mu _{0}}^{k+1}\right) },0,\ldots ,0\right) \\= & {} \left( \frac{(1-\alpha )^{2}s^{2}}{\gamma (1-\alpha +\beta ) \left( \mu _{0}^{k}-k\beta \,\overline{\mu _{0}}\right) },0,\ldots ,0\right) , \end{aligned}$$

and finally

$$\begin{aligned} \left\langle \mathbf {p},B\left( \mathbf {q},B(\mathbf {q}, \overline{\mathbf {q}})\left( I_{k+1}-A^t\right) ^{-1}\right) \right\rangle =\frac{(1-\alpha )^{2}s^{2}}{\gamma (1-\alpha +\beta ) \left( \mu _{0}^{k}-k\beta \,\overline{\mu _{0}}\right) }. \end{aligned}$$
(3.5)

To complete the calculation of \(d_k(\theta )\) we must take care of the product

$$\begin{aligned} \left\langle \mathbf {p},B\left( \overline{\mathbf {q}},B\left( \mathbf {q}, \mathbf {q}\right) \left( \mu _{0}^{2}I_{k+1}-A^t\right) ^{-1}\right) \right\rangle . \end{aligned}$$

First of all, we have

$$\begin{aligned} B(\mathbf {q},\mathbf {q})= \left( \frac{(1-\alpha )s\,\overline{\mu _{0}}^{2k}}{\left( 1-k\beta \, \overline{\mu _{0}}^{k+1}\right) ^{2}},0,\ldots ,0\right) \end{aligned}$$

and

$$\begin{aligned}\mu _{0}^{2}I_{k+1}-A= \begin{pmatrix} \mu _{0}^{2}-\alpha &{}0&{}\cdots &{}0&{}\beta \\ -1 &{}\mu _{0}^{2}&{}\cdots &{}0&{}0\\ 0 &{}-1 &{}\cdots &{}0&{}0\\ 0 &{}0&{}\cdots &{}\mu _{0}^{2}&{}0\\ 0 &{}0&{}\cdots &{}-1 &{}\mu _{0}^{2} \end{pmatrix}. \end{aligned}$$

If, as we did earlier, we write \(\mathbf {x}=B(\mathbf {q},\mathbf {q})(\mu _{0}^{2}I_{k+1}-A^t)^{-1}\), we can calculate \(\mathbf {x}\) by solving

$$\begin{aligned} (\mu _{0}^{2}-\alpha )x_{0}+\beta x_{k}&=\frac{(1-\alpha )\,s\overline{\mu _{0}}^{2k}}{\left( 1-k\beta \, \overline{\mu _{0}}^{k+1}\right) ^{2}}\\ -x_{0}+\mu _{0}^{2}x_{1}&=0 \\ -x_{1}+\mu _{0}^{2}x_{2}&=0 \\ \vdots \\ -x_{k-1}+\mu _{0}^{2}x_{k}&=0 \end{aligned}$$

Indeed, the last k equations imply

$$\begin{aligned} x_{0}=\mu _{0}^{2}\,x_{1}=\mu _{0}^{4}\,x_{2}=\cdots =\mu _{0}^{2k}x_{k} \end{aligned}$$

and the first one

$$\begin{aligned} x_{k}=\frac{(1-\alpha )s\,\overline{\mu _{0}}^{2k}}{\left( 1-k\beta \, \overline{\mu _{0}}^{k+1}\right) ^{2}\left( \left( \mu _{0}^{2}-\alpha \right) \mu _{0}^{2k}+\beta \right) }. \end{aligned}$$

Thus

$$\begin{aligned} \begin{aligned} B(\mathbf {q},\mathbf {q})\left( \mu _{0}^{2}I_{k+1}-A^t\right) ^{-1}&= \left( \frac{(1-\alpha )s}{\left( 1-k\beta \,\overline{\mu _{0}}^{k+1}\right) ^{2} \left( \left( \mu _{0}^{2}-\alpha \right) \mu _{0}^{2k}+\beta \right) }\right. ,\\&\qquad \frac{(1-\alpha )s\,\overline{\mu _{0}}^2}{\left( 1-k\beta \,\overline{\mu _{0}}^{k+1}\right) ^{2} \left( \left( \mu _{0}^{2}-\alpha \right) \mu _{0}^{2k}+\beta \right) },\ldots ,\\&\qquad \left. \frac{(1-\alpha )s\, \overline{\mu _{0}}^{2k}}{\left( 1-k\beta \,\overline{\mu _{0}}^{k+1}\right) ^{2} \left( \left( \mu _{0}^{2}-\alpha \right) \mu _{0}^{2k}+\beta \right) }\right) \end{aligned} \end{aligned}$$

and

$$\begin{aligned}&\left\langle \mathbf {p}, B\left( \overline{\mathbf {q}},B(\mathbf {q},\mathbf {q}) (\mu _{0}^{2}I_{k+1}-A^t)^{-1}\right) \right\rangle \nonumber \\&\quad = \frac{(1-\alpha )^{2}s^{2}\,\overline{\mu _{0}}^{k}}{\left( 1-k\beta \,\overline{\mu _{0}}^{k+1}\right) ^{2} \left( 1-k\beta \mu _{0}^{k+1}\right) \left( \left( \mu _{0}^{2}-\alpha \right) \mu _{0}^{2k}+\beta \right) }\nonumber \\&\quad = \frac{(1-\alpha )^{2}s^{2}\,\overline{\mu _{0}}^{k}}{\gamma \left( 1-k\beta \,\overline{\mu _{0}}^{k+1}\right) \left( \left( \mu _{0}^{2}-\alpha \right) \mu _{0}^{2k}+\beta \right) }\nonumber \\&\quad =\frac{(1-\alpha )^{2}s^{2}}{\gamma \left( \mu _{0}^{k}-k\beta \, \overline{\mu _{0}}\right) \left( \left( \mu _{0}^{2}-\alpha \right) \mu _{0}^{2k}+\beta \right) }. \end{aligned}$$
(3.6)

Bringing together (3.4), (3.5) and (3.6), we finally obtain

$$\begin{aligned} d_k(\theta )= & {} \frac{1}{2}\mathrm{Re}\left( \frac{(1-\alpha )t}{\gamma \left( \mu _{0}^{k+1}-k\beta \right) } +\frac{2(1-\alpha )^{2}s^{2}}{\gamma (1-\alpha +\beta )\left( \mu _{0}^{k+1}-k\beta \right) }\right. \\&\qquad +\left. \frac{(1-\alpha )^{2}s^{2}}{\gamma (\mu _{0}^{k+1}-k\beta )\left( \left( \mu _{0}^{2}-\alpha \right) \mu _{0}^{2k}+\beta \right) }\right) \\= & {} \frac{1}{2}\mathrm{Re}\left( \frac{(1-\alpha )t\overline{\psi }}{\gamma |\psi |^2} +\frac{2(1-\alpha )^{2}s^{2}\overline{\psi }}{\gamma (1-\alpha +\beta )|\psi |^2} +\frac{(1-\alpha )^{2}s^{2}\overline{\psi }}{\gamma |\psi |^2\omega }\right) \\= & {} \frac{(1-\alpha )\mathrm{Re}(\psi )}{2\gamma |\psi |^{2}}\left( t +\frac{2(1-\alpha )s^{2}}{1-\alpha +\beta } +\frac{(1-\alpha )s^{2}}{\mathrm{Re}(\psi )}\mathrm{Re}\left( \frac{\overline{\psi }}{\omega }\right) \right) , \end{aligned}$$

where \(\psi =\mu _{0}^{k+1}-k\beta \) and \(\omega =(\mu _{0}^{2}-\alpha )\mu _{0}^{2k}+\beta \). Observe that

$$\begin{aligned} \mathrm{Re}(\psi )=\cos ((k+1)\theta )-k\beta <0 \end{aligned}$$

because \((k+1)\theta \in (\frac{\pi }{2},\pi )\). Therefore, \(\frac{2\gamma |\psi |^{2}}{(1-\alpha )\mathrm{Re}(\psi )r}>0\) and \(d_k(\theta )\) has the same sign as

$$\begin{aligned} \frac{t}{r}+\frac{2(1-\alpha )r s^{2}}{(1-\alpha +\beta )r^2} +\frac{(1-\alpha )rs^{2}}{\mathrm{Re}(\psi )r^2}\mathrm{Re}\left( \frac{\overline{\psi }}{\omega }\right) = \frac{t}{r}-M_k(\theta )\frac{s^2}{r^2}, \end{aligned}$$

with

$$\begin{aligned} M_k(\theta )=\frac{2\beta }{1-\alpha +\beta }+ \frac{\beta }{\mathrm{Re}(\psi )}\mathrm{Re}\left( \frac{\overline{\psi }}{\omega }\right) . \end{aligned}$$

Observe that if we write

$$\begin{aligned} \sigma =\left( 1+\mu _0^{k+1}\right) \left( 1+\mu _0^k\right) -1, \end{aligned}$$

then we have

$$\begin{aligned} \omega= & {} \left( \mu _{0}^{2}-\alpha \right) \mu _{0}^{2k}+\beta -\mu _0^k +\alpha \mu _0^{k+1}-\beta \mu _0^{2k+1} \\= & {} -\mu _0^k+\alpha \mu _0^{k+1}+\alpha \mu _0^{2k+1}-\beta \mu _0^{k+1} -\beta \mu _0^{2k+1} -\alpha \mu _0^{2k} +\beta \\= & {} -\mu _0^k -\mu _0^{2k+1}+\alpha \mu _0^{k+1}+\alpha \mu _0^{2k+1}-\beta \mu _0^{k+1} -\beta \mu _0^k-\beta \mu _0^{2k+1}+\beta \\= & {} -\mu _0^{k+1}-\mu _0^k -\mu _0^{2k+1} +\alpha \mu _0^{k+1}+\alpha \mu _0^k+\alpha \mu _0^{2k+1} -\beta \mu _0^{k+1} -\beta \mu _0^k-\beta \mu _0^{2k+1} \\= & {} -(1-\alpha +\beta )\left( \mu _0^{k+1}+\mu _0^k +\mu _0^{2k+1}\right) \\= & {} -(1-\alpha +\beta )\sigma ; \end{aligned}$$

we have used several times that \(\mu _0\) (and its conjugate) are roots of \(\lambda ^{k+1}-\alpha \lambda ^k+\beta =0\). Thus

$$\begin{aligned} M_k(\theta )=\frac{2\beta }{1-\alpha +\beta }\left( 1- \frac{1}{2\mathrm{Re}(\psi )} \mathrm{Re}\left( \frac{\overline{\psi }}{\sigma }\right) \right) . \end{aligned}$$
(3.7)
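The identity \(\omega =-(1-\alpha +\beta )\sigma \) obtained above can be confirmed numerically, again using the closed forms of \(\alpha \) and \(\beta \) that make \(\mu _0=e^{i\theta }\) a root of \(\lambda ^{k+1}-\alpha \lambda ^k+\beta =0\) (a sketch; the sampled k and \(\theta \) are arbitrary):

```python
import cmath, math

def omega_sigma_gap(k, theta):
    # |omega + (1 - alpha + beta) * sigma| at an admissible theta
    mu0 = cmath.exp(1j * theta)
    alpha = math.sin((k + 1) * theta) / math.sin(k * theta)
    beta = -(1 - alpha) * math.cos(theta / 2) / math.cos((2 * k + 1) * theta / 2)
    omega = (mu0 ** 2 - alpha) * mu0 ** (2 * k) + beta
    sigma = (1 + mu0 ** (k + 1)) * (1 + mu0 ** k) - 1
    return abs(omega + (1 - alpha + beta) * sigma)

worst = max(
    omega_sigma_gap(k, math.pi / (2 * k + 1)
                    + t * (math.pi / (k + 1) - math.pi / (2 * k + 1)))
    for k in (1, 2, 4) for t in (0.25, 0.5, 0.75))
assert worst < 1e-12  # the identity holds to machine precision
```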

We have shown that \(d_k(\theta )<0\) (respectively, \(d_k(\theta )>0\)) if and only if \(\varSigma h(u)<M_k(\theta )\) (respectively, \(\varSigma h(u)>M_k(\theta )\)). Later on it will be more convenient to work with the reparametrizations \(R_k(\varTheta )=r_k(\theta ), N_k(\varTheta )=M_k(\theta ), \varTheta =(k+1)\theta \), of our maps \(r_k(\theta )\) and \(M_k(\theta )\). The theorem below summarizes our results:

Theorem 7

Let \(\varTheta \in (\frac{(k+1)\pi }{2k+1},\pi )\) be such that \(R_k(\varTheta )=r=h'(u)\). If \(\varSigma h(u)<N_k(\varTheta )\) (respectively, \(\varSigma h(u)>N_k(\varTheta )\)), then the equilibrium u of (1.8) exhibits a Neimark–Sacker supercritical (respectively, subcritical) bifurcation at \(\alpha =\alpha _k(\frac{\varTheta }{k+1})\).

4 On the Maps \(R_k\) and \(N_k\)

Here we recall some properties of the maps

$$\begin{aligned} R_k(\varTheta )=\frac{\cos \left( \frac{\varTheta }{2(k+1)}\right) }{\cos \left( \frac{(2k+1)\varTheta }{2(k+1)}\right) } \end{aligned}$$

and \(N_k(\varTheta )\) that will be useful later. In principle these maps only make sense on the interval \((\frac{(k+1)\pi }{2k+1},\pi )\), but they are well defined at \(\pi \) as well. Moreover:

Lemma 3

We have \(R_k(\pi )=-1\) and

$$\begin{aligned} R_k'(\pi )=\textstyle {\tan (\frac{\pi }{2(k+1)})}. \end{aligned}$$

In particular, \(R_k'(\pi )>0\) for any \(k\ge 1\).

Proof

The first statement \(R_k(\pi )=-1\) is clear. On the other hand,

$$\begin{aligned} R_k'(\varTheta )= & {} \frac{-\frac{1}{2(k+1)}\sin \left( \frac{\varTheta }{2(k+1)}\right) \cos \left( \frac{(2k+1)\varTheta }{2(k+1)}\right) + \frac{2k+1}{2(k+1)}\cos \left( \frac{\varTheta }{2(k+1)}\right) \sin \left( \frac{(2k+1)\varTheta }{2(k+1)}\right) }{\cos ^2\left( \frac{(2k+1)\varTheta }{2(k+1)}\right) } \\= & {} \frac{-\frac{1}{2(k+1)}\sin \varTheta +\cos \left( \frac{\varTheta }{2(k+1)}\right) \sin \left( \frac{(2k+1)\varTheta }{2(k+1)}\right) }{\cos ^2\left( \frac{(2k+1)\varTheta }{2(k+1)}\right) }\\= & {} \frac{\frac{k}{k+1}\sin \varTheta + \sin \left( \frac{k\varTheta }{k+1}\right) }{2\cos ^2\left( \frac{(2k+1)\varTheta }{2(k+1)}\right) }, \end{aligned}$$

hence

$$\begin{aligned} R_k'(\pi )=\frac{\sin \left( \frac{\pi }{k+1}\right) }{2\cos ^2\left( \frac{\pi }{2(k+1)}\right) }= \textstyle {\tan \left( \frac{\pi }{2(k+1)}\right) }. \end{aligned}$$

\(\square \)
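Lemma 3 can be checked numerically: the sketch below (with arbitrary sample values of k) compares \(R_k(\pi )\) with \(-1\), and a central-difference approximation of \(R_k'(\pi )\) with \(\tan (\frac{\pi }{2(k+1)})\), using that \(R_k\) extends smoothly past \(\pi \):

```python
import math

def R(k, T):
    # R_k(Theta) = cos(Theta/(2(k+1))) / cos((2k+1)Theta/(2(k+1)))
    return math.cos(T / (2 * (k + 1))) / math.cos((2 * k + 1) * T / (2 * (k + 1)))

h = 1e-6
worst = 0.0
for k in (1, 2, 3, 7):
    assert abs(R(k, math.pi) + 1) < 1e-12        # R_k(pi) = -1
    deriv = (R(k, math.pi + h) - R(k, math.pi - h)) / (2 * h)  # central difference
    worst = max(worst, abs(deriv - math.tan(math.pi / (2 * (k + 1)))))
assert worst < 1e-6                              # R_k'(pi) = tan(pi/(2(k+1)))
```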

Next we make \(N_k(\varTheta )\) explicit. Writing as in the previous section \(\alpha =\alpha _k(\theta ), \beta =\beta _k(\theta ), \mu _0=e^{i\theta }\) and \(\psi =\mu _{0}^{k+1}-k\beta \), we have

$$\begin{aligned} \frac{2\beta }{1-\alpha +\beta }= & {} \frac{-2 R_k(\varTheta )}{1-R_k(\varTheta )}\\= & {} \frac{-2\cos \left( \frac{\varTheta }{2(k+1)}\right) }{\cos \left( \frac{(2k+1)\varTheta }{2(k+1)}\right) -\cos \left( \frac{\varTheta }{2(k+1)}\right) }\\= & {} \frac{\cos \left( \frac{\varTheta }{2(k+1)}\right) }{ \sin \left( \frac{\varTheta }{2}\right) \sin \left( \frac{k\varTheta }{2(k+1)}\right) } \end{aligned}$$

and

$$\begin{aligned} \frac{\overline{\psi }}{\mathrm{Re}(\psi )}= & {} \frac{\overline{\mu _{0}^{k+1}-k\beta }}{\mathrm{Re}\left( \mu _{0}^{k+1}-k\beta \right) }\\= & {} \frac{e^{-i\varTheta }-\frac{k\sin \left( \frac{\varTheta }{k+1}\right) }{\sin \left( \frac{k\varTheta }{k+1}\right) }}{ \cos \varTheta -\frac{k\sin \left( \frac{\varTheta }{k+1}\right) }{\sin \left( \frac{k\varTheta }{k+1}\right) }}\\= & {} \frac{\sin \left( \frac{k\varTheta }{k+1}\right) e^{-i\varTheta }-k \sin \left( \frac{\varTheta }{k+1}\right) }{\sin \left( \frac{k\varTheta }{k+1}\right) \cos \varTheta -k\sin \left( \frac{\varTheta }{k+1}\right) }. \end{aligned}$$

Now, in view of (3.7), we get

$$\begin{aligned} N_k(\varTheta )=\frac{P_k(\varTheta )}{Q_k(\varTheta )}\left( 1-\frac{1}{2 U_k(\varTheta )}\mathrm{Re}\left( \frac{V_k(\varTheta )}{W_k(\varTheta )}\right) \right) , \end{aligned}$$

with

$$\begin{aligned} P_k(\varTheta )= & {} \textstyle {\cos (\frac{\varTheta }{2(k+1)})},\\ Q_k(\varTheta )= & {} \textstyle {\sin (\frac{\varTheta }{2})\sin (\frac{k\varTheta }{2(k+1)})},\\ U_k(\varTheta )= & {} \textstyle {\sin (\frac{k\varTheta }{k+1})\cos \varTheta -k\sin (\frac{\varTheta }{k+1})},\\ V_k(\varTheta )= & {} \textstyle {\sin (\frac{k\varTheta }{k+1})e^{-i\varTheta }-k\sin (\frac{\varTheta }{k+1})},\\ W_k(\varTheta )= & {} (1+e^{i\varTheta })(1+e^{\frac{ik\varTheta }{k+1}})-1. \end{aligned}$$

Lemma 4

We have \(N_k(\pi )=\frac{3}{2}\) and

$$\begin{aligned} N_k'(\pi )=\frac{1}{4}\textstyle {\left( 2\cos \left( \frac{\pi }{k+1} \right) -1\right) \tan \left( \frac{\pi }{2(k+1)}\right) }. \end{aligned}$$

In particular, \(N_1'(\pi )=-\frac{1}{4}<0, N_2'(\pi )=0\) and \(N_k'(\pi )>0\) for any \(k\ge 3\).

Proof

To simplify the notation we write \(P=P_k(\pi ), P'=P_k'(\pi )\); numbers \(Q, Q', U, U', V, V', W, W'\) are similarly defined. Clearly, we have

$$\begin{aligned} P=Q=\textstyle {\cos (\frac{\pi }{2(k+1)})}, U=V\hbox { and }W=-1. \end{aligned}$$
(4.1)

Then we get immediately

$$\begin{aligned} N_k(\pi )=\frac{P}{Q}\left( 1-\frac{1}{2U}\mathrm{Re}\left( \frac{V}{W}\right) \right) =\frac{3}{2}. \end{aligned}$$

On the other hand, we have

$$\begin{aligned} P'= & {} -\frac{1}{2(k+1)}\textstyle {\sin \left( \frac{\pi }{2(k+1)}\right) }, \end{aligned}$$
(4.2)
$$\begin{aligned} Q'= & {} \frac{k}{2(k+1)}\textstyle {\cos \left( \frac{k\pi }{2(k+1)}\right) } \nonumber \\= & {} \displaystyle {\frac{k}{2(k+1)}}\textstyle {\sin \left( \frac{\pi }{2(k+1)}\right) .} \end{aligned}$$
(4.3)

Also, observe that \(U_k(\varTheta )=\mathrm{Re}(V_k(\varTheta ))\), hence

$$\begin{aligned} U'=\mathrm{Re}(V'). \end{aligned}$$
(4.4)

Finally,

$$\begin{aligned} W'=-i\left( 1+e^{\frac{ik\pi }{k+1}}\right) , \end{aligned}$$

hence

$$\begin{aligned} \mathrm{Re}(W')=\textstyle {\sin \left( \frac{k\pi }{k+1}\right) = \sin \left( \frac{\pi }{k+1}\right) }. \end{aligned}$$
(4.5)

Therefore, using (4.1)-(4.5), we get

$$\begin{aligned} N_k'(\pi )= & {} \frac{P'Q-PQ'}{Q^2}\left( 1-\frac{1}{2U}\mathrm{Re}\left( \frac{V}{W}\right) \right) \\&\qquad +\frac{P}{2Q}\left( \frac{U'}{U^2}\mathrm{Re}\left( \frac{V}{W}\right) - \frac{1}{U}\mathrm{Re}\left( \frac{V'W-VW'}{W^2}\right) \right) \\= & {} \frac{3}{2}\frac{P'-Q'}{Q}-\frac{U'}{2U}+\frac{1}{2U}\mathrm{Re}(V'+VW')\\= & {} \frac{3}{2}\frac{P'-Q'}{Q}+\frac{1}{2}\mathrm{Re}(W')\\= & {} -\frac{3}{4}\frac{\sin \left( \frac{\pi }{2(k+1)}\right) }{\cos \left( \frac{\pi }{2(k+1)}\right) } +\frac{1}{2}\textstyle {\sin \left( \frac{\pi }{k+1}\right) }\\= & {} \frac{1}{4}\textstyle {\left( -3+4\cos ^2\left( \frac{\pi }{2(k+1)}\right) \right) \tan \left( \frac{\pi }{2(k+1)}\right) }\\= & {} \frac{1}{4}\textstyle {\left( 2\cos \left( \frac{\pi }{k+1}\right) -1\right) \tan \left( \frac{\pi }{2(k+1)}\right) } \end{aligned}$$

as we desired to show. \(\square \)
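As with Lemma 3, the values \(N_k(\pi )=\frac{3}{2}\) and the formula for \(N_k'(\pi )\) can be corroborated numerically from the defining maps \(P_k, Q_k, U_k, V_k, W_k\); the sketch below uses a central difference across \(\pi \) (the sampled k are arbitrary, and we use that the maps extend smoothly past \(\pi \)):

```python
import cmath, math

def N(k, T):
    # N_k(Theta) assembled from P_k, Q_k, U_k, V_k, W_k
    P = math.cos(T / (2 * (k + 1)))
    Q = math.sin(T / 2) * math.sin(k * T / (2 * (k + 1)))
    U = math.sin(k * T / (k + 1)) * math.cos(T) - k * math.sin(T / (k + 1))
    V = math.sin(k * T / (k + 1)) * cmath.exp(-1j * T) - k * math.sin(T / (k + 1))
    W = (1 + cmath.exp(1j * T)) * (1 + cmath.exp(1j * k * T / (k + 1))) - 1
    return (P / Q) * (1 - (V / W).real / (2 * U))

h = 1e-5
worst = 0.0
for k in (1, 2, 3, 6):
    assert abs(N(k, math.pi) - 1.5) < 1e-10      # N_k(pi) = 3/2
    deriv = (N(k, math.pi + h) - N(k, math.pi - h)) / (2 * h)  # central difference
    predicted = 0.25 * (2 * math.cos(math.pi / (k + 1)) - 1) \
        * math.tan(math.pi / (2 * (k + 1)))
    worst = max(worst, abs(deriv - predicted))
assert worst < 1e-6                              # matches the formula for N_k'(pi)
```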

We already know that the maps \(R_k(\varTheta )\) are strictly increasing; our next aim is to study the monotonicity properties of the maps \(N_k(\varTheta )\). We need two elementary trigonometric lemmas.

Lemma 5

Let \(a,b\in \mathbb {R}\) and \(c=(1+e^{ia})(1+e^{ib})-1\). Then

$$\begin{aligned} \mathrm{Re}(c)= & {} -1+4\textstyle {\cos \left( \frac{a}{2}\right) \cos \left( \frac{b}{2}\right) \cos \left( \frac{a+b}{2}\right) },\\ \mathrm{Im}(c)= & {} 4\textstyle {\cos \left( \frac{a}{2}\right) \cos \left( \frac{b}{2}\right) \sin \left( \frac{a+b}{2}\right) },\\ |c|^2= & {} 1+8\textstyle {\cos \left( \frac{a}{2}\right) \cos \left( \frac{b}{2}\right) \cos \left( \frac{a-b}{2}\right) }. \end{aligned}$$

Proof

The first two statements follow after taking real and imaginary parts in

$$\begin{aligned} c=\textstyle {4e^{\frac{i(a+b)}{2}}\cos \left( \frac{a}{2}\right) \cos \left( \frac{b}{2}\right) -1}. \end{aligned}$$

Since \(c=e^{ia}+e^{ib}+e^{i(a+b)}\), we also get

$$\begin{aligned} \cos a +\cos b+\cos (a+b)=-1+4\textstyle {\cos \left( \frac{a}{2}\right) \cos \left( \frac{b}{2}\right) \cos \left( \frac{a+b}{2}\right) }, \end{aligned}$$

which, after replacing \(b\) by \(-b\), implies

$$\begin{aligned} \cos a +\cos b+\cos (a-b)=-1+4\textstyle {\cos \left( \frac{a}{2}\right) \cos \left( \frac{b}{2}\right) \cos \left( \frac{a-b}{2}\right) } \end{aligned}$$

as well. Then

$$\begin{aligned} |c|^2= & {} \left( e^{ia}+e^{ib}+e^{i(a+b)}\right) \left( e^{-ia}+e^{-ib}+e^{-i(a+b)}\right) \\= & {} 3+2(\cos a +\cos b +\cos (a-b))\\= & {} 1+8\textstyle {\cos \left( \frac{a}{2}\right) \cos \left( \frac{b}{2}\right) \cos \left( \frac{a-b}{2}\right) }, \end{aligned}$$

which proves the last statement of the lemma. \(\square \)

Lemma 6

We have \(\sin ^2(y\varTheta )-y\sin ^2\varTheta \ge 0\) for any \((\varTheta ,y)\in [\frac{\pi }{2},\pi ]\times [\frac{1}{2},1]\).

Proof

Let \(F(\varTheta ,y)=\sin ^2(y\varTheta )-y\sin ^2\varTheta \). The system

$$\begin{aligned} 0= & {} F_\varTheta (\varTheta ,y)=y\sin (2y\varTheta )-y\sin (2\varTheta )\\ 0= & {} F_y(\varTheta ,y)=\varTheta \sin (2y\varTheta )-\sin ^2\varTheta \end{aligned}$$

has no solution in the interior of the rectangle \([\frac{\pi }{2},\pi ]\times [\frac{1}{2},1]\), so all extrema of F are attained on its boundary. Now we have \(F(\varTheta ,1)=0, F(\pi ,y)=\sin ^2(y\pi )\ge 0, F(\frac{\pi }{2},y)=\sin ^2(\frac{y\pi }{2})-y\ge 0\) for any \(y\in [\frac{1}{2},1]\) (because the map \(\sin ^2(\frac{y\pi }{2})\) is concave in this interval and coincides with y at the endpoints \(y=\frac{1}{2}\) and \(y=1\)) and

$$\begin{aligned} \textstyle {F(\varTheta ,\frac{1}{2}) =\sin ^2\left( \frac{\varTheta }{2}\right) }- \displaystyle {\frac{1}{2}}\textstyle {\sin ^2\varTheta =\sin ^2\left( \frac{\varTheta }{2}\right) \left( 1-2\cos ^2\left( \frac{\varTheta }{2}\right) \right) \ge 0} \end{aligned}$$

for any \(\varTheta \in [\frac{\pi }{2},\pi ]\). The lemma is proved. \(\square \)
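The inequality of Lemma 6 is easy to probe numerically on a grid over the rectangle (a sketch; the grid resolution is an arbitrary choice):

```python
import math

def F(T, y):
    # F(Theta, y) = sin^2(y*Theta) - y*sin^2(Theta)
    return math.sin(y * T) ** 2 - y * math.sin(T) ** 2

# sample the rectangle [pi/2, pi] x [1/2, 1] on a uniform 41 x 41 grid
lowest = min(F(math.pi / 2 + i * (math.pi / 2) / 40, 0.5 + j * 0.5 / 40)
             for i in range(41) for j in range(41))
assert lowest > -1e-12   # F >= 0 up to rounding (it vanishes on y = 1)
```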

Now we write

$$\begin{aligned} N_k(\varTheta )= & {} \frac{P_k(\varTheta )}{Q_k(\varTheta )}\left( 1-\frac{1}{2}\frac{\mathrm{Re}(V_k(\varTheta )\overline{W_k(\varTheta )})}{ \mathrm{Re}(V_k(\varTheta ))|W_k(\varTheta )|^2}\right) \\= & {} \frac{P_k(\varTheta )}{Q_k(\varTheta )}\left( 1-\frac{1}{2}\frac{\mathrm{Re}(W_k(\varTheta ))}{ |W_k(\varTheta )|^2} - \frac{1}{2} \frac{\mathrm{Im}(V_k(\varTheta ))}{ \mathrm{Re}(V_k(\varTheta ))}\frac{\mathrm{Im}(W_k(\varTheta ))}{|W_k(\varTheta )|^2}\right) \\= & {} D_k(\varTheta )C_k(\varTheta ) \end{aligned}$$

with

$$\begin{aligned} A_k(\varTheta )= & {} \frac{-\mathrm{Re}(W_k(\varTheta ))}{|W_k(\varTheta )|^2} =\frac{1-4\cos \left( \frac{\varTheta }{2}\right) \cos \left( \frac{k\varTheta }{2(k+1)}\right) \cos \left( \frac{(2k+1)\varTheta }{2(k+1)}\right) }{1+8\cos \left( \frac{\varTheta }{2}\right) \cos \left( \frac{k\varTheta }{2(k+1)}\right) \cos \left( \frac{\varTheta }{2(k+1)}\right) },\\ B_k(\varTheta )= & {} \frac{-\mathrm{Im}(V_k(\varTheta ))}{\mathrm{Re}(V_k(\varTheta ))} \frac{\mathrm{Im}(W_k(\varTheta ))}{|W_k(\varTheta )|^2}\\= & {} \frac{\sin \varTheta \sin \left( \frac{k\varTheta }{k+1}\right) }{\cos \varTheta \sin \left( \frac{k\varTheta }{k+1}\right) - k\sin \left( \frac{\varTheta }{k+1}\right) } \frac{4\cos \left( \frac{\varTheta }{2}\right) \cos \left( \frac{k\varTheta }{2(k+1)}\right) \sin \left( \frac{(2k+1)\varTheta }{2(k+1)}\right) }{1+8\cos \left( \frac{\varTheta }{2}\right) \cos \left( \frac{k\varTheta }{2(k+1)}\right) \cos \left( \frac{\varTheta }{2(k+1)}\right) },\\ C_k(\varTheta )= & {} 1+\frac{1}{2}A_k(\varTheta )+ \frac{1}{2}B_k(\varTheta ),\\ D_k(\varTheta )= & {} \frac{-2R_k(\varTheta )}{1-R_k(\varTheta )}= \frac{P_k(\varTheta )}{Q_k(\varTheta )}=\frac{\cos \left( \frac{\varTheta }{2(k+1)}\right) }{\sin \left( \frac{\varTheta }{2}\right) \sin \left( \frac{k\varTheta }{2(k+1)}\right) }; \end{aligned}$$

we have used Lemma 5.

Observe that the maps \(N_k(\varTheta )\) (and all the intermediate maps therein, with the exception of \(R_k(\varTheta )\)) can be seen as maps defined on the whole interval \([\frac{\pi }{2},\pi ]\), and we will do so in the sequel.

Lemma 7

Both \(N_1(\varTheta )\) and \(N_2(\varTheta )\) are strictly decreasing in \([\frac{\pi }{2},\pi ]\).

Proof

The map \(N_1(\varTheta )\) admits a very simple expression:

$$\begin{aligned} A_1(\varTheta )= & {} \frac{1-4\cos \left( \frac{\varTheta }{2}\right) \cos \left( \frac{\varTheta }{4}\right) \cos \left( \frac{3\varTheta }{4}\right) }{1+8\cos \left( \frac{\varTheta }{2}\right) \cos ^2\left( \frac{\varTheta }{4}\right) }\\= & {} \frac{1-2\cos \left( \frac{\varTheta }{2}\right) \left( \cos \varTheta +\cos \left( \frac{\varTheta }{2}\right) \right) }{1+4\cos \left( \frac{\varTheta }{2}\right) \left( 1+\cos \left( \frac{\varTheta }{2}\right) \right) }\\= & {} -\frac{\cos \varTheta }{1+2\cos \left( \frac{\varTheta }{2}\right) },\\ B_1(\varTheta )= & {} \frac{\sin \varTheta \sin \left( \frac{\varTheta }{2}\right) }{\cos \varTheta \sin \left( \frac{\varTheta }{2}\right) -\sin \left( \frac{\varTheta }{2}\right) } \frac{4\cos \left( \frac{\varTheta }{2}\right) \cos \left( \frac{\varTheta }{4}\right) \sin \left( \frac{3\varTheta }{4}\right) }{1+8\cos \left( \frac{\varTheta }{2}\right) \cos ^2\left( \frac{\varTheta }{4}\right) }\\= & {} \frac{\sin \varTheta }{\cos \varTheta -1} \frac{2\cos \left( \frac{\varTheta }{2}\right) \left( \sin \varTheta +\sin \left( \frac{\varTheta }{2}\right) \right) }{\left( 1+2\cos \left( \frac{\varTheta }{2}\right) \right) ^2}\\= & {} -\frac{\cos \left( \frac{\varTheta }{2}\right) }{\sin \left( \frac{\varTheta }{2}\right) } \frac{\sin \varTheta }{1+2\cos \left( \frac{\varTheta }{2}\right) }\\= & {} \frac{-2\cos ^2\left( \frac{\varTheta }{2}\right) }{1+2\cos \left( \frac{\varTheta }{2}\right) },\\ C_1(\varTheta )= & {} 1-\frac{\cos \varTheta }{2\left( 1+2\cos \left( \frac{\varTheta }{2}\right) \right) }- \frac{\cos ^2\left( \frac{\varTheta }{2}\right) }{1+2 \cos \left( \frac{\varTheta }{2}\right) }\\= & {} 1-\frac{4\cos ^2\left( \frac{\varTheta }{2}\right) -1}{2\left( 1+2\cos \left( \frac{\varTheta }{2}\right) \right) }\\= & {} \frac{3}{2} -\textstyle {\cos \left( \frac{\varTheta }{2}\right) },\\ D_1(\varTheta )= & {} \frac{\cos \left( \frac{\varTheta }{4}\right) }{\sin \left( \frac{\varTheta }{2}\right) \sin 
\left( \frac{\varTheta }{4}\right) } =\frac{1}{2 \sin ^2\left( \frac{\varTheta }{4}\right) } \end{aligned}$$

and

$$\begin{aligned} N_1(\varTheta )=\frac{3-2\cos \left( \frac{\varTheta }{2}\right) }{4\sin ^2\left( \frac{\varTheta }{4}\right) } =1+\frac{1}{4\sin ^2\left( \frac{\varTheta }{4}\right) }. \end{aligned}$$

This implies the lemma in the case \(k=1\).
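As a quick numerical sanity check (not part of the proof), \(N_1=D_1C_1\) can be evaluated straight from the defining formulas of \(A_k,B_k,C_k,D_k\) and compared with the closed form just obtained; a minimal Python sketch (the helper names are ours, and T stands for \(\varTheta \)):

```python
import math

# A_k, B_k, C_k, D_k coded exactly as defined above.
def A(k, T):
    num = 1 - 4*math.cos(T/2)*math.cos(k*T/(2*(k+1)))*math.cos((2*k+1)*T/(2*(k+1)))
    den = 1 + 8*math.cos(T/2)*math.cos(k*T/(2*(k+1)))*math.cos(T/(2*(k+1)))
    return num/den

def B(k, T):
    q1 = math.sin(T)*math.sin(k*T/(k+1))
    q1 /= math.cos(T)*math.sin(k*T/(k+1)) - k*math.sin(T/(k+1))
    q2 = 4*math.cos(T/2)*math.cos(k*T/(2*(k+1)))*math.sin((2*k+1)*T/(2*(k+1)))
    q2 /= 1 + 8*math.cos(T/2)*math.cos(k*T/(2*(k+1)))*math.cos(T/(2*(k+1)))
    return q1*q2

def N(k, T):
    C = 1 + A(k, T)/2 + B(k, T)/2
    D = math.cos(T/(2*(k+1)))/(math.sin(T/2)*math.sin(k*T/(2*(k+1))))
    return D*C

# N_1 should agree with 1 + 1/(4 sin^2(T/4)) and be strictly decreasing.
grid = [math.pi/2 + i*(math.pi/2)/300 for i in range(300)]
vals = [N(1, T) for T in grid]
assert all(abs(v - (1 + 1/(4*math.sin(T/4)**2))) < 1e-9 for v, T in zip(vals, grid))
assert all(a > b for a, b in zip(vals, vals[1:]))
```

The second assertion mirrors the monotonicity statement of the lemma.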

We have been unable to find a short expression for \(N_2(\varTheta )\), but using some well-known trigonometric identities it is possible to write it in terms only of \(\cos (\frac{\varTheta }{3})\). After some rather cumbersome calculations, which we omit, we get

$$\begin{aligned} N_2(\varTheta )=\frac{3}{2}+\textstyle {G\left( \cos \left( \frac{\varTheta }{3}\right) \right) }, \end{aligned}$$

where

$$\begin{aligned} G(x)= \frac{(1-2x)^2 \left( -x+10x^2+4x^3+16x^4 +16x^5\right) }{(1-x)\left( 2-4x+16 x^3 +128x^5+128x^6\right) }. \end{aligned}$$

Now, proving that \(N_2(\varTheta )\) decreases in \([\frac{\pi }{2},\pi ]\) amounts to showing that

$$\begin{aligned} H(y)=\textstyle {G\left( y+\frac{1}{2}\right) }= \displaystyle {\frac{y^2\left( 4+25 y+60 y^2+76 y^3+56 y^4+16 y^5\right) }{(1-2 y)\left( 1+9 y+38 y^2+82 y^3+100 y^4+64 y^5+16 y^6\right) }} \end{aligned}$$

increases in \([0,\frac{\sqrt{3}-1}{2}]\). But in fact such is the case in \([0,\frac{1}{2})\), because

$$\begin{aligned} H'(y)=\frac{y\left( 8+103 y+590 y^2+2116 y^3+5376 y^4+10224 y^5+14656 y^6+14944 y^7+9856 y^8+3584 y^9+512 y^{10}\right) }{(1-2 y)^2\left( 1+9 y+38 y^2+82 y^3+100 y^4+64 y^5+16 y^6\right) ^2}. \end{aligned}$$

\(\square \)
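Since the intermediate calculations are omitted, the expression for G can at least be checked numerically against \(N_2=D_2C_2\) computed from the definitions; a Python sketch (a spot check, not a proof; helper names ours):

```python
import math

# A_k, B_k, N_k = D_k*C_k coded from their definitions; T stands for Theta.
def A(k, T):
    num = 1 - 4*math.cos(T/2)*math.cos(k*T/(2*(k+1)))*math.cos((2*k+1)*T/(2*(k+1)))
    den = 1 + 8*math.cos(T/2)*math.cos(k*T/(2*(k+1)))*math.cos(T/(2*(k+1)))
    return num/den

def B(k, T):
    q1 = math.sin(T)*math.sin(k*T/(k+1))
    q1 /= math.cos(T)*math.sin(k*T/(k+1)) - k*math.sin(T/(k+1))
    q2 = 4*math.cos(T/2)*math.cos(k*T/(2*(k+1)))*math.sin((2*k+1)*T/(2*(k+1)))
    q2 /= 1 + 8*math.cos(T/2)*math.cos(k*T/(2*(k+1)))*math.cos(T/(2*(k+1)))
    return q1*q2

def N(k, T):
    C = 1 + A(k, T)/2 + B(k, T)/2
    D = math.cos(T/(2*(k+1)))/(math.sin(T/2)*math.sin(k*T/(2*(k+1))))
    return D*C

def G(x):
    num = (1 - 2*x)**2*(-x + 10*x**2 + 4*x**3 + 16*x**4 + 16*x**5)
    den = (1 - x)*(2 - 4*x + 16*x**3 + 128*x**5 + 128*x**6)
    return num/den

for i in range(300):
    T = math.pi/2 + i*(math.pi/2)/300
    assert abs(N(2, T) - (1.5 + G(math.cos(T/3)))) < 1e-9
```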

Remark 4

Observe that the map \(N_1(\varTheta )\) can be described in \((\frac{2\pi }{3},\pi ]\) in terms only of \(R_1(\varTheta )\):

$$\begin{aligned} R_1(\varTheta ) =\frac{\sin \left( \frac{\varTheta }{2}\right) }{\sin \varTheta -\sin \left( \frac{\varTheta }{2}\right) } =\frac{1}{2\cos \left( \frac{\varTheta }{2}\right) -1}, \end{aligned}$$

hence

$$\begin{aligned} \textstyle {\cos \left( \frac{\varTheta }{2}\right) } =\displaystyle {\frac{1+R_1(\varTheta )}{2R_1(\varTheta )}}. \end{aligned}$$

Therefore,

$$\begin{aligned} N_1(\varTheta )= & {} 1+\frac{1}{2\left( 1-\cos \left( \frac{\varTheta }{2}\right) \right) }\\= & {} 1+\frac{1}{2\left( 1-\frac{1+R_1(\varTheta )}{2R_1(\varTheta )}\right) }\\= & {} \frac{1-2R_1(\varTheta )}{1-R_1(\varTheta )}. \end{aligned}$$

In combination with Theorem 7, this implies Theorem 5.
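The identity is easy to confirm numerically from the closed form of \(N_1\) in Lemma 7; a brief Python check:

```python
import math

def N1(T):                 # closed form from Lemma 7
    return 1 + 1/(4*math.sin(T/4)**2)

def R1(T):                 # R_1 as above, valid for T in (2*pi/3, pi]
    return 1/(2*math.cos(T/2) - 1)

for i in range(1, 200):
    T = 2*math.pi/3 + i*(math.pi/3)/200
    R = R1(T)
    assert abs(N1(T) - (1 - 2*R)/(1 - R)) < 1e-9
```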

If \(k\ge 3\), then the maps \(N_k(\varTheta )\) are no longer decreasing (see Lemma 4); in fact, numerical estimates suggest that they have exactly one relative (and absolute) minimum, see Fig. 2. Apparently it is not possible to calculate these minima analytically, but at least we can “decompose” \(N_k(\varTheta )\) into increasing and decreasing factors as follows:

Fig. 2
figure 2

Some maps \(N_k(\varTheta )\) for different values of k; by “\(N_\infty (\varTheta )\)” we mean the limit map \(N(\varTheta ,0)\) given by (4.15). Observe that although “in general” \(N_\infty (\varTheta )\) is smaller than the other maps, this need not happen near \(\pi \)

Lemma 8

The maps \(D_k(\varTheta )\) (respectively, \(C_k(\varTheta )\)) are positive and strictly decreasing (respectively, strictly increasing) in \([\frac{\pi }{2},\pi ]\) for any \(k\ge 1\).

Proof

The statement is obvious for \(D_k(\varTheta )\); proving it for \(C_k(\varTheta )\) requires some work.

First we show that both \(A_k(\varTheta )\) and \(B_k(\varTheta )\), and hence \(C_k(\varTheta )\), are strictly increasing. To do this we will just check the sign of their derivatives in \([\frac{\pi }{2},\pi )\).

Let

$$\begin{aligned} A_k(\varTheta )=\frac{1-f(\varTheta )(\frac{f(\varTheta )}{2} -g(\varTheta ))}{1+2f(\varTheta )g(\varTheta )}= \frac{1}{2}+\frac{1-f^2(\varTheta )}{2+4f(\varTheta )g(\varTheta )} \end{aligned}$$

with \(f(\varTheta )=4\cos (\frac{k\varTheta }{2(k+1)})\cos (\frac{\varTheta }{2}), g(\varTheta )=\cos (\frac{\varTheta }{2(k+1)})\). Then

$$\begin{aligned} A_k'= & {} \frac{-ff'(1+2fg)-(1-f^2)(fg'+f'g)}{(1+2fg)^2}\\= & {} \frac{-ff'-f'g-fg'+f^2(fg'-f'g)}{(1+2fg)^2}. \end{aligned}$$

Since \(f>0, g>0, f'<0, g'<0\) and

$$\begin{aligned} fg'-f'g= & {} -\displaystyle {\frac{2}{k+1}}\, \textstyle {\cos \left( \frac{k\varTheta }{2(k+1)}\right) \cos \left( \frac{\varTheta }{2}\right) \sin \left( \frac{\varTheta }{2(k+1)}\right) }\\&\qquad +\,2\,\textstyle {\cos \left( \frac{k\varTheta }{2(k+1)}\right) \sin \left( \frac{\varTheta }{2}\right) \cos \left( \frac{\varTheta }{2(k+1)}\right) }\\&\qquad +\displaystyle {\frac{2k}{k+1}}\, \textstyle {\sin \left( \frac{k\varTheta }{2(k+1)}\right) \cos \left( \frac{\varTheta }{2}\right) \cos \left( \frac{\varTheta }{2(k+1)}\right) }\\= & {} 2\,\textstyle {\sin \left( \frac{(2k+1)\varTheta }{2(k+1)}\right) \cos \left( \frac{\varTheta }{2(k+1)}\right) } -\displaystyle {\frac{1}{k+1}}\sin \varTheta \\= & {} \displaystyle {\frac{k}{k+1}}\sin \varTheta +\textstyle {\sin \left( \frac{k\varTheta }{k+1}\right) }, \end{aligned}$$

we have \(A_k'>0\) as we desired to show.

Next we prove that \(\frac{-1}{B_k(\varTheta )}\) has positive derivative in \([\frac{\pi }{2},\pi )\), which also implies \(B_k'(\varTheta )> 0\) for any \(\varTheta \in [\frac{\pi }{2},\pi )\). Observe that

$$\begin{aligned} \frac{-1}{B_k(\varTheta )}=w(\varTheta )z(\varTheta ) \left( 2 +\frac{1}{f(\varTheta )g(\varTheta )}\right) , \end{aligned}$$

with

$$\begin{aligned} w(\varTheta )=\frac{k\sin \left( \frac{\varTheta }{k+1}\right) }{\sin \varTheta \sin \left( \frac{k\varTheta }{k+1}\right) }-\cot \varTheta \end{aligned}$$
(4.6)

and

$$\begin{aligned} z(\varTheta )=\frac{\cos \left( \frac{\varTheta }{2(k+1)}\right) }{\sin \left( \frac{(2k+1) \varTheta }{2(k+1)}\right) }. \end{aligned}$$
(4.7)

Since \(w\ge 0\), \(z\ge 0, f\ge 0, g\ge 0\) and \(f'\le 0, g'\le 0\), it suffices to show that \((wz)'> 0\). Now we have

$$\begin{aligned} \left( \frac{k\sin \left( \frac{\varTheta }{k+1}\right) }{\sin \varTheta \sin \left( \frac{k\varTheta }{k+1}\right) }\right) ' =\frac{k\, p(\varTheta )}{\sin ^2\varTheta \sin ^2\left( \frac{k\varTheta }{k+1}\right) } \end{aligned}$$

with

$$\begin{aligned} p(\varTheta )= & {} \frac{1}{k+1}\textstyle {\cos \left( \frac{\varTheta }{k+1}\right) \sin \varTheta \sin \left( \frac{k\varTheta }{k+1}\right) } \textstyle {-\sin \left( \frac{\varTheta }{k+1}\right) \cos \varTheta \sin \left( \frac{k\varTheta }{k+1}\right) }\\&\qquad -\frac{k}{k+1}\textstyle {\sin \left( \frac{\varTheta }{k+1}\right) \sin \varTheta \cos \left( \frac{k\varTheta }{k+1}\right) }\\= & {} \frac{1}{k+1}\sin ^2\varTheta -\textstyle {\sin \left( \frac{\varTheta }{k+1}\right) \sin \left( \frac{(2k+1)\varTheta }{k+1}\right) }\\= & {} \frac{1}{k+1}\sin ^2\varTheta -\frac{1}{2}\textstyle {\cos \left( \frac{2k\varTheta }{k+1}\right) } +\displaystyle {\frac{1}{2}\left( \cos ^2\varTheta -\sin ^2\varTheta \right) }\\= & {} -\frac{k}{k+1}\sin ^2\varTheta +\frac{1}{2}\textstyle {\left( 1-\cos \left( \frac{2k\varTheta }{k+1}\right) \right) }\\= & {} \textstyle {\sin ^2\left( \frac{k\varTheta }{k+1}\right) }-\displaystyle {\frac{k}{k+1}}\sin ^2\varTheta \\\ge & {} 0 \end{aligned}$$

by Lemma 6. Hence

$$\begin{aligned} w'(\varTheta )\ge \frac{1}{\sin ^2\varTheta }. \end{aligned}$$
(4.8)
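The inequality \(p(\varTheta )\ge 0\) above reduces to \(\sin ^2(\frac{k\varTheta }{k+1})\ge \frac{k}{k+1}\sin ^2\varTheta \) on \([\frac{\pi }{2},\pi ]\); a quick grid check in Python (a spot check, not a proof):

```python
import math

# The final line of the chain for p(T): it should be nonnegative
# on [pi/2, pi] for every k (here up to rounding error).
for k in range(1, 21):
    for i in range(301):
        T = math.pi/2 + i*(math.pi/2)/300
        assert math.sin(k*T/(k+1))**2 - k/(k+1)*math.sin(T)**2 > -1e-12
```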

Also, we have

$$\begin{aligned} z'(\varTheta )=\frac{q(\varTheta )}{\sin ^2\left( \frac{(2k+1)\varTheta }{2(k+1)}\right) } \end{aligned}$$
(4.9)

with

$$\begin{aligned} q(\varTheta )= & {} -\frac{1}{2(k+1)}\textstyle {\sin \left( \frac{\varTheta }{2(k+1)}\right) \sin \left( \frac{(2k+1)\varTheta }{2(k+1)}\right) }\nonumber \\&\displaystyle {-\frac{2k+1}{2(k+1)}}\textstyle {\cos \left( \frac{\varTheta }{2(k+1)}\right) \cos \left( \frac{(2k+1)\varTheta }{2(k+1)}\right) }\nonumber \\= & {} -\frac{1}{4(k+1)}\textstyle {\left( \cos \left( \frac{k\varTheta }{k+1}\right) - \cos \varTheta \right) }\nonumber \\&\displaystyle {-\frac{2k+1}{4(k+1)}}\textstyle {\left( \cos \left( \frac{k\varTheta }{k+1}\right) +\cos \varTheta \right) }\nonumber \\= & {} -\frac{k}{2(k+1)}\cos \varTheta -\frac{1}{2}\textstyle {\cos \left( \frac{k\varTheta }{k+1}\right) }. \end{aligned}$$
(4.10)

Combining (4.6)–(4.10) we conclude that, to prove \((wz)'>0\), it suffices to show that

$$\begin{aligned} \frac{1}{\sin ^2\varTheta } z(\varTheta )+ w(\varTheta )z'(\varTheta )= \frac{j(\varTheta )}{2\sin \varTheta \sin \left( \frac{k\varTheta }{k+1}\right) \sin ^2\left( \frac{(2k+1)\varTheta }{2(k+1)}\right) }>0, \end{aligned}$$

where

$$\begin{aligned} j(\varTheta )= & {} \sin \left( \frac{k\varTheta }{k+1}\right) + \displaystyle {\frac{\sin ^2\left( \frac{k\varTheta }{k+1}\right) }{\sin \varTheta }} -\left( \frac{k}{k+1}\cos \varTheta + \cos \left( \frac{k\varTheta }{k+1}\right) \right) \\&\left( k\sin \left( \frac{\varTheta }{k+1}\right) - \cos \varTheta \sin \left( \frac{k\varTheta }{k+1}\right) \right) . \end{aligned}$$

We must show \(j(\varTheta )>0\). After removing some nonnegative terms from \(j(\varTheta )\) (recall that \(\cos \varTheta \le 0\) because \(\varTheta \in [\frac{\pi }{2},\pi )\)), we are left to prove

$$\begin{aligned} \textstyle {\sin \left( \frac{k\varTheta }{k+1}\right) -\cos \left( \frac{k\varTheta }{k+1}\right) \left( k\sin \left( \frac{\varTheta }{k+1}\right) -\cos \varTheta \sin \left( \frac{k\varTheta }{k+1}\right) \right) }>0. \end{aligned}$$
(4.11)

Furthermore, (4.11) is equivalent to

$$\begin{aligned} t(\varTheta )>s(\varTheta ), \end{aligned}$$
(4.12)

with

$$\begin{aligned} t(\varTheta )=1+\textstyle {\cos \left( \frac{k\varTheta }{k+1}\right) \cos (\varTheta )} \end{aligned}$$

and

$$\begin{aligned} s(\varTheta )=\frac{k\sin \left( \frac{\varTheta }{k+1}\right) \cos \left( \frac{k\varTheta }{k+1}\right) }{\sin \left( \frac{k\varTheta }{k+1}\right) }. \end{aligned}$$

On the one hand we have, as an easy calculation shows,

$$\begin{aligned} t(\varTheta )\ge & {} 1+\textstyle {\cos \left( \frac{\varTheta }{2}\right) \cos (\varTheta )} =\textstyle {2\cos ^3\left( \frac{\varTheta }{2}\right) -\cos \left( \frac{\varTheta }{2}\right) +1} \nonumber \\\ge & {} \displaystyle {1-\frac{2}{3\sqrt{6}}}=0.7278\ldots . \end{aligned}$$
(4.13)

On the other hand,

$$\begin{aligned} s'(\varTheta )= k\frac{\cos \left( \frac{\varTheta }{k+1}\right) \sin \left( \frac{2k\varTheta }{k+1}\right) -2k\sin \left( \frac{\varTheta }{k+1}\right) }{2(k+1)\sin ^2\left( \frac{k\varTheta }{k+1}\right) }< 0 \end{aligned}$$
(4.14)

because

$$\begin{aligned} \textstyle {\sin \left( \frac{2k\varTheta }{k+1}\right) } <\displaystyle {\frac{2k\varTheta }{k+1}}<\textstyle {2k\tan \left( \frac{\varTheta }{k+1}\right) }. \end{aligned}$$

In view of (4.13) and (4.14), to get (4.12) we are left to show that

$$\begin{aligned} n(k)=k\sin \left( \frac{\pi }{2(k+1)}\right) \tan \left( \frac{\pi }{2(k+1)}\right) \end{aligned}$$

is bounded from above by \(1-\frac{2}{3\sqrt{6}}\). But this follows from \(n(1)=\frac{1}{\sqrt{2}}=0.7071\ldots \) and the fact that n(k), even when seen as a map defined on the whole interval \([1,\infty )\), is strictly decreasing:

$$\begin{aligned} n'(k)=\textstyle {\sin \left( \frac{\pi }{2(k+1)}\right) }\left( \textstyle {\tan \left( \frac{\pi }{2(k+1)}\right) } -\displaystyle {\frac{k\pi }{2(k+1)^2} \left( 1+\frac{1}{\cos ^2\left( \frac{\pi }{2(k+1)}\right) }\right) }\right) \end{aligned}$$

and

$$\begin{aligned} \frac{\tan \left( \frac{\pi }{2(k+1)}\right) }{\frac{\pi }{2(k+1)}} < \frac{4}{3}< \frac{k}{k+1} \left( 1+\frac{1}{\cos ^2\left( \frac{\pi }{2(k+1)}\right) }\right) \end{aligned}$$

for any \(k\ge 1\) (the left inequality holds because \(\frac{\tan x}{x}\) is increasing in \([0,\frac{\pi }{2})\); the second one is equivalent to \(\frac{3k}{k+4}>\cos ^2(\frac{\pi }{2(k+1)})\), which is easy to prove just comparing the corresponding derivatives).
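Both the monotonicity of n(k) and the bound are easy to confirm numerically; in Python:

```python
import math

def n(k):
    a = math.pi/(2*(k + 1))
    return k*math.sin(a)*math.tan(a)

bound = 1 - 2/(3*math.sqrt(6))               # = 0.7278...
ks = [1 + i/10 for i in range(500)]          # real k in [1, 50.9]
vals = [n(k) for k in ks]
assert abs(vals[0] - 1/math.sqrt(2)) < 1e-12         # n(1) = 0.7071...
assert all(a > b for a, b in zip(vals, vals[1:]))    # strictly decreasing
assert all(v < bound for v in vals)
```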

We have just shown that \(C_k(\varTheta )\) is strictly increasing; to finish the proof we show that this map is positive, for which it is enough to check that \(C_k\left( \frac{\pi }{2}\right) >0\). First note that

$$\begin{aligned} \textstyle {-\mathrm{Re}(W_k\left( \frac{\pi }{2}\right) )}= & {} \textstyle {1-4\cos \left( \frac{\pi }{4}\right) \cos \left( \frac{k\pi }{4(k+1)}\right) \cos \left( \frac{(2k+1)\pi }{4(k+1)}\right) } \\\ge & {} \textstyle {1-4\cos \left( \frac{\pi }{4}\right) \cos \left( \frac{\pi }{8}\right) \cos \left( \frac{3\pi }{8}\right) } \\= & {} 0, \end{aligned}$$

hence \(A_k\left( \frac{\pi }{2}\right) \ge 0\). Moreover, \(0\le \frac{\mathrm{Im}(W_k(\varTheta ))}{|W_k(\varTheta )|^2}\le 1\) and

$$\begin{aligned} \frac{-\mathrm{Im}(V_k\left( \frac{\pi }{2}\right) )}{\mathrm{Re}(V_k\left( \frac{\pi }{2}\right) )} =\frac{-\sin \left( \frac{k\pi }{2(k+1)}\right) }{k\sin \left( \frac{\pi }{2(k+1)}\right) }>-2 \end{aligned}$$

because

$$\begin{aligned} \textstyle {\sin \left( \frac{k\pi }{2(k+1)}\right) }< \displaystyle {\frac{k\pi }{2(k+1)}} < \textstyle {2k\sin \left( \frac{\pi }{2(k+1)}\right) } \end{aligned}$$

for any \(k\ge 1\). Therefore, \(B_k\left( \frac{\pi }{2}\right) > -2\) and \(C_k\left( \frac{\pi }{2}\right) =1+\frac{1}{2}A_k\left( \frac{\pi }{2}\right) +\frac{1}{2}B_k\left( \frac{\pi }{2}\right) >0\) as we desired to prove. \(\square \)

The following simple map,

$$\begin{aligned} L_k(\varTheta )=\frac{2-R_k(\varTheta )}{1-R_k(\varTheta )}= \frac{\frac{1}{2}\cos \left( \frac{\varTheta }{2(k+1)}\right) -\cos \left( \frac{(2k+1)\varTheta }{2(k+1)}\right) }{\sin \left( \frac{\varTheta }{2}\right) \sin \left( \frac{k\varTheta }{2(k+1)}\right) } \end{aligned}$$

provides a decent approximation to \(N_k(\varTheta )\) near \(\pi \), especially for large values of k, because \(L_k(\pi )=\frac{3}{2}\) and \(L_k'(\pi )=\frac{1}{4}\tan \left( \frac{\pi }{2(k+1)}\right) \) (compare to Lemma 4). Moreover, it bounds \(N_k(\varTheta )\) from below:

Lemma 9

We have \(N_k(\varTheta )>L_k(\varTheta )\) for any \(\varTheta \in [\frac{\pi }{2},\pi )\) and any \(k\ge 1\).

Proof

Proving \(N_k(\varTheta )>L_k(\varTheta )\) is equivalent to showing that

$$\begin{aligned} \frac{1+A_k(\varTheta )+\frac{2\cos \left( \frac{(2k+1)\varTheta }{2(k+1)}\right) }{\cos \left( \frac{\varTheta }{2(k+1)}\right) }}{-B_k(\varTheta )}>1, \end{aligned}$$

that is,

$$\begin{aligned} w\frac{4+6fg-f^2+\frac{4c}{g}+8fc}{2fv}>1; \end{aligned}$$

we are using the notation of Lemma 8, writing also \(v(\varTheta )=\sin \left( \frac{(2k+1)\varTheta }{2(k+1)}\right) \) and \(c(\varTheta )=\cos \left( \frac{(2k+1)\varTheta }{2(k+1)}\right) \). Now observe that

$$\begin{aligned} 4+6fg-f^2+\frac{4c}{g}+8fc=\frac{2f}{g}+3f^2-2fg>3f^2 \end{aligned}$$

because \(2(g+c)=f\) and \(0<g<1\). Since

$$\begin{aligned} \frac{fw}{v}=\frac{k\sin \left( \frac{\varTheta }{k+1}\right) -\cos \varTheta \sin \left( \frac{k\varTheta }{k+1}\right) }{\sin \left( \frac{(2k+1)\varTheta }{2(k+1)}\right) \sin \left( \frac{\varTheta }{2}\right) \sin \left( \frac{k\varTheta }{2(k+1)}\right) }, \end{aligned}$$

we only need to show that

$$\begin{aligned} \frac{k\sin \left( \frac{\varTheta }{k+1}\right) }{\sin \left( \frac{(2k+1)\varTheta }{2(k+1)}\right) \sin \left( \frac{\varTheta }{2}\right) \sin \left( \frac{k\varTheta }{2(k+1)}\right) } >\frac{2}{3}, \end{aligned}$$

but this is very easy because the left-hand denominator is smaller than 1, the map \(m(k)=k\sin \left( \frac{\varTheta }{k+1}\right) \) is increasing in k (as can be immediately checked by calculating its derivative), and

$$\begin{aligned} m(1)=\textstyle {\sin \left( \frac{\varTheta }{2}\right) \ge \sin \left( \frac{\pi }{4}\right) } =\displaystyle {\frac{\sqrt{2}}{2}>\frac{2}{3}.} \end{aligned}$$

\(\square \)
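Both the behaviour of \(L_k\) at \(\pi \) (the value \(\frac{3}{2}\) and the derivative \(\frac{1}{4}\tan (\frac{\pi }{2(k+1)})\), the latter approximated by a central difference) and the inequality \(N_k>L_k\) of Lemma 9 can be probed numerically; a Python grid check (a spot check, not a proof; helper names ours):

```python
import math

# A_k, B_k, N_k = D_k*C_k and L_k coded from their definitions; T is Theta.
def A(k, T):
    num = 1 - 4*math.cos(T/2)*math.cos(k*T/(2*(k+1)))*math.cos((2*k+1)*T/(2*(k+1)))
    den = 1 + 8*math.cos(T/2)*math.cos(k*T/(2*(k+1)))*math.cos(T/(2*(k+1)))
    return num/den

def B(k, T):
    q1 = math.sin(T)*math.sin(k*T/(k+1))
    q1 /= math.cos(T)*math.sin(k*T/(k+1)) - k*math.sin(T/(k+1))
    q2 = 4*math.cos(T/2)*math.cos(k*T/(2*(k+1)))*math.sin((2*k+1)*T/(2*(k+1)))
    q2 /= 1 + 8*math.cos(T/2)*math.cos(k*T/(2*(k+1)))*math.cos(T/(2*(k+1)))
    return q1*q2

def N(k, T):
    C = 1 + A(k, T)/2 + B(k, T)/2
    D = math.cos(T/(2*(k+1)))/(math.sin(T/2)*math.sin(k*T/(2*(k+1))))
    return D*C

def L(k, T):
    num = 0.5*math.cos(T/(2*(k+1))) - math.cos((2*k+1)*T/(2*(k+1)))
    return num/(math.sin(T/2)*math.sin(k*T/(2*(k+1))))

h = 1e-5
for k in range(1, 9):
    assert abs(L(k, math.pi) - 1.5) < 1e-12
    dL = (L(k, math.pi + h) - L(k, math.pi - h))/(2*h)
    assert abs(dL - 0.25*math.tan(math.pi/(2*(k+1)))) < 1e-6
    for i in range(300):
        T = math.pi/2 + i*(math.pi/2 - 0.01)/300
        assert N(k, T) > L(k, T)
```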

With the obvious exception of Lemma 7, the only relevant property of k that we have used above is \(k\ge 1\). In particular, the definition and properties of maps \(R_k(\varTheta )\) and \(N_k(\varTheta )\) (and intermediate maps \(C_k(\varTheta )\) and \(D_k(\varTheta )\)) remain valid for all real numbers \(k\in [1,\infty )\). Thus it makes sense to define \(R(\varTheta ,y)=R_{1/y-1}(\varTheta ), C(\varTheta ,y)=C_{1/y-1}(\varTheta ), D(\varTheta ,y)=D_{1/y-1}(\varTheta )\) and \(N(\varTheta ,y)=N_{1/y-1}(\varTheta )\) for any \(0<y\le \frac{1}{2}\). Furthermore, these maps can be continuously extended to \(y=0\) by writing

$$\begin{aligned} R(\varTheta ,0)= & {} \frac{1}{\cos \varTheta },\\ C(\varTheta ,0)= & {} 1+ \frac{\frac{1}{2}-2\cos ^2\left( \frac{\varTheta }{2}\right) \cos \varTheta }{1+8\cos ^2\left( \frac{\varTheta }{2}\right) } + \frac{\sin ^2\varTheta }{\cos \varTheta \sin \varTheta -\varTheta } \frac{2\cos ^2\left( \frac{\varTheta }{2}\right) \sin \varTheta }{1+8\cos ^2\left( \frac{\varTheta }{2}\right) },\\ D(\varTheta ,0)= & {} \frac{1}{\sin ^2\left( \frac{\varTheta }{2}\right) } \end{aligned}$$

and

$$\begin{aligned} N(\varTheta ,0)= & {} D(\varTheta ,0)C(\varTheta ,0) \nonumber \\= & {} \frac{1}{\sin ^2\left( \frac{\varTheta }{2}\right) }\left( 1+ \frac{\frac{1}{2}-2\cos ^2\left( \frac{\varTheta }{2}\right) \cos \varTheta }{1+8\cos ^2\left( \frac{\varTheta }{2}\right) } \right. \nonumber \\&\quad \left. + \frac{\sin ^2\varTheta }{\cos \varTheta \sin \varTheta -\varTheta } \frac{2\cos ^2\left( \frac{\varTheta }{2}\right) \sin \varTheta }{1+8\cos ^2\left( \frac{\varTheta }{2}\right) }\right) . \end{aligned}$$
(4.15)

Thus we get well-defined continuous maps \(R(\varTheta ,y)\) and \(N(\varTheta ,y)\) whose domains are, respectively, the trapezium \(\{(\varTheta ,y): 0\le y\le \frac{1}{2}, \frac{\pi }{2-y}<\varTheta \le \pi \}\) and the rectangle \([\frac{\pi }{2},\pi ]\times [0,\frac{1}{2}]\). Finally, it is easy to check that these extensions have the same sign and monotonicity properties as the maps they extend. We can summarize our results as follows:

Lemma 10

\(R(\varTheta ,y)\) is negative and strictly increasing in \(\varTheta , C(\varTheta ,y)\) is positive and strictly increasing in \(\varTheta \), and \(D(\varTheta ,y)\) is positive and strictly decreasing in \(\varTheta \).

As it turns out, these monotonicity properties are reversed when the first variable \(\varTheta \) is fixed:

Lemma 11

Both \(R(\varTheta ,y)\) and \(C(\varTheta ,y)\) are decreasing in y, and \(D(\varTheta ,y)\) is increasing in y.

Proof

We have

$$\begin{aligned} R_y(\varTheta ,y)=-\frac{\varTheta \sin \varTheta }{2\cos ^2 \left( \frac{(2-y)\varTheta }{2}\right) } \end{aligned}$$

and

$$\begin{aligned} D_y(\varTheta ,y)=\frac{\varTheta \cos \left( \frac{\varTheta }{2}\right) }{2\sin \left( \frac{\varTheta }{2}\right) \sin ^2\left( \frac{(1-y)\varTheta }{2}\right) }, \end{aligned}$$

which settles the matter regarding \(R(\varTheta ,y)\) and \(D(\varTheta ,y)\).
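Both partial derivatives are easy to confirm by central finite differences; a Python sketch for \(D(\varTheta ,y)\), whose explicit form \(\frac{\cos (\frac{y\varTheta }{2})}{\sin (\frac{\varTheta }{2})\sin (\frac{(1-y)\varTheta }{2})}\) follows from that of \(D_k\) with \(k=\frac{1}{y}-1\):

```python
import math

def D(T, y):
    return math.cos(y*T/2)/(math.sin(T/2)*math.sin((1 - y)*T/2))

def Dy(T, y):   # the closed form of the partial derivative displayed above
    return T*math.cos(T/2)/(2*math.sin(T/2)*math.sin((1 - y)*T/2)**2)

h = 1e-6
for T in (1.8, 2.2, 2.6, 3.0):
    for y in (0.05, 0.2, 0.35, 0.5):
        fd = (D(T, y + h) - D(T, y - h))/(2*h)   # central difference in y
        assert abs(fd - Dy(T, y)) < 1e-5
```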

To prove that \(C(\varTheta ,y)\) is y-decreasing we proceed similarly to Lemma 8, showing now that both \(A(\varTheta ,y)=A_{1/y-1}(\varTheta )\) and \(B(\varTheta ,y)=B_{1/y-1}(\varTheta )\) are y-decreasing. We keep using the notation there, thus \(f(\varTheta )=4\cos \left( \frac{k\varTheta }{2(k+1)}\right) \cos \left( \frac{\varTheta }{2}\right) \) becomes \(f(\varTheta ,y)=4\cos \left( \frac{(1-y)\varTheta }{2}\right) \cos \left( \frac{\varTheta }{2}\right) , g(\varTheta )=\cos \left( \frac{\varTheta }{2(k+1)}\right) \) becomes \(g(\varTheta ,y)=\cos \left( \frac{y\varTheta }{2}\right) \) and so on.

To check that \(A(\varTheta ,y)\) is y-decreasing it suffices to show that

$$\begin{aligned} -ff_y-f_yg-fg_y+f^2(fg_y-f_yg)\le 0. \end{aligned}$$

This follows from \(f\ge 0, g\ge 0, f_y\ge 0, g_y\le 0\) and

$$\begin{aligned} (fg)_y(\varTheta ,y)=\textstyle {2\varTheta \cos \left( \frac{\varTheta }{2}\right) \sin \left( \frac{(1-2y)\varTheta }{2}\right) } \ge 0. \end{aligned}$$

The proof that \(B(\varTheta ,y)\) is y-decreasing is simple as well: fg is y-increasing (as we have just shown) and both z and w are y-decreasing. In fact,

$$\begin{aligned} z_y(\varTheta ,y)=\frac{\varTheta \cos \varTheta }{2\sin ^2 \left( \frac{(2-y)\varTheta }{2}\right) }; \end{aligned}$$

w is y-decreasing because so is \(\frac{\sin x}{x}, x\in (0,\pi ]\). \(\square \)

5 Local and Global Attraction for Clark’s Equation

We devote this section to proving the main results of the paper.

Proof (Theorem 2)

If (a) or (b) holds, then the statement follows immediately from Theorem 7 and, respectively, Lemmas 7 and 9.

Assume that (c) is satisfied. Lemmas 10 and 11 imply that if \([a,b]\subset [\frac{\pi }{2},\pi ]\) and \(z\in [0,\frac{1}{2}]\), then

$$\begin{aligned} \min _{(\varTheta ,y)\in [a,b]\times [0,z]} N(\varTheta ,y)\ge D(b,0)C(a,z). \end{aligned}$$

In particular, if for any positive integer j and \(z\in [0,\frac{1}{2}]\) we define

$$\begin{aligned} n(j,z)=\min _{0\le i<j} \textstyle {D\left( \frac{\pi }{2}+\frac{(i+1)\pi }{2j},0\right) C\left( \frac{\pi }{2}+\frac{i\pi }{2j},z\right) }, \end{aligned}$$

then we have \(N(\varTheta ,y)\ge n(j,z)\) for any j, any \(\varTheta \in [\frac{\pi }{2},\pi ]\) and \(0\le y\le z\). Likewise,

$$\begin{aligned} n_k(j)=\min _{0\le i<j} \textstyle {D_k\left( \frac{\pi }{2}+\frac{(i+1)\pi }{2j}\right) C_k\left( \frac{\pi }{2}+\frac{i\pi }{2j}\right) } \end{aligned}$$

satisfies \(N_k(\varTheta )\ge n_k(j)\) for any j and any \(\varTheta \in [\frac{\pi }{2},\pi ]\). Now direct computations show

$$\begin{aligned} n(1000,0.005)=1.4904\cdots \end{aligned}$$

and

$$\begin{aligned} \min _{1\le k\le 198}n_k(200)=1.4906\cdots , \end{aligned}$$

hence \(N_k(\varTheta )>1.49\) for any k and \(\varTheta \). The statement of the theorem follows from Theorem 7.
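The second of these computations is cheap to reproduce; a Python sketch of \(n_k(j)\), with \(A_k,B_k,C_k,D_k\) coded from their definitions (helper names ours):

```python
import math

def A(k, T):
    num = 1 - 4*math.cos(T/2)*math.cos(k*T/(2*(k+1)))*math.cos((2*k+1)*T/(2*(k+1)))
    den = 1 + 8*math.cos(T/2)*math.cos(k*T/(2*(k+1)))*math.cos(T/(2*(k+1)))
    return num/den

def B(k, T):
    q1 = math.sin(T)*math.sin(k*T/(k+1))
    q1 /= math.cos(T)*math.sin(k*T/(k+1)) - k*math.sin(T/(k+1))
    q2 = 4*math.cos(T/2)*math.cos(k*T/(2*(k+1)))*math.sin((2*k+1)*T/(2*(k+1)))
    q2 /= 1 + 8*math.cos(T/2)*math.cos(k*T/(2*(k+1)))*math.cos(T/(2*(k+1)))
    return q1*q2

def Ck(k, T):
    return 1 + A(k, T)/2 + B(k, T)/2

def Dk(k, T):
    return math.cos(T/(2*(k+1)))/(math.sin(T/2)*math.sin(k*T/(2*(k+1))))

def nk(k, j):
    # min over the grid of D_k at the right endpoint times C_k at the left one
    return min(Dk(k, math.pi/2 + (i + 1)*math.pi/(2*j)) *
               Ck(k, math.pi/2 + i*math.pi/(2*j)) for i in range(j))

assert 1.49 < min(nk(k, 200) for k in range(1, 199)) < 1.5
```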

Assume that (d) holds. For any positive integer j and any \(z\in [0,\frac{1}{2}]\), let \(i_{j,z}\) be the largest integer \(0\le i'<j\) with the property that

$$\begin{aligned} \min _{0\le i\le i'} \textstyle {D\left( \frac{\pi }{2}+\frac{(i+1)\pi }{2j},0\right) C\left( \frac{\pi }{2}+\frac{i\pi }{2j},z\right) }> \displaystyle {\frac{3}{2}} \end{aligned}$$

and define

$$\begin{aligned} r(j,z)=\textstyle {R\left( \frac{\pi }{2}+\frac{(i_{j,z}+1)\pi }{2j},z\right) }. \end{aligned}$$

In principle, it may happen that the point \((\frac{\pi }{2}+\frac{(i_{j,z}+1)\pi }{2j},z)\) does not belong to the domain of the map R; in such a case, write \(r(j,z)=-\infty \). Similarly, let \(i_{j,k}\) be the largest integer \(0\le i'<j\) with the property that

$$\begin{aligned} \min _{0\le i\le i'} \textstyle {D_k\left( \frac{\pi }{2}+\frac{(i+1)\pi }{2j}\right) C_k\left( \frac{\pi }{2}+\frac{i\pi }{2j}\right) }> \displaystyle {\frac{3}{2}} \end{aligned}$$

and define

$$\begin{aligned} r_k(j)=\textstyle {R_k\left( \frac{\pi }{2}+\frac{(i_{j,k}+1)\pi }{2j}\right) } \end{aligned}$$

(or \(r_k(j)=-\infty \) if \(\frac{\pi }{2}+\frac{(i_{j,k}+1)\pi }{2j}\le \frac{(k+1)\pi }{2k+1}\)).

Now we use again Lemmas 10 and 11: since \(R_k(\varTheta )\) is strictly increasing we have, for any given j, that \(N_k(\varTheta )>\frac{3}{2}\) whenever \(R_k(\varTheta )\le r_k(j)\). Analogously, \(N(\varTheta ,y)>\frac{3}{2}\) whenever \(R(\varTheta ,z)\le r(j,z)\) and \(0\le y\le z\). In particular (because \(R(\varTheta ,y)\) is decreasing in y), \(N(\varTheta ,y)>\frac{3}{2}\) whenever \(R(\varTheta ,y)\le r(j,z)\) and \(0\le y\le z\). We have, respectively,

$$\begin{aligned} \min _{1\le k\le 998} r_k(1200)=-1.1797\cdots , \end{aligned}$$

and

$$\begin{aligned} r(20000,0.001)=-1.17995\cdots . \end{aligned}$$

The statement then follows from Theorem 7. \(\square \)

Proof (Theorem 3)

Fix \(k\ge 3\). If the number \(\varepsilon _1>0\) is small enough, then, by writing \(D(0)=-1\) and applying (i), D can be seen as a decreasing diffeomorphism defined on \([0,\varepsilon _1]\). Now, applying Lemma 3 and using if necessary a smaller \(\varepsilon _1>0\), we get a decreasing diffeomorphism \(\varTheta \), also defined on \([0,\varepsilon _1]\), such that \(\varTheta (0)=\pi \) and \(D(\varepsilon )=R_{k}(\varTheta (\varepsilon ))\) for any \(\varepsilon \in [0,\varepsilon _1]\).

Since \(k\ge 3\), we have \((N_k\circ \varTheta )(0)=3/2\) and \((N_k\circ \varTheta )'(0)=N_k'(\pi )\varTheta '(0)<0\) by Lemma 4. Using (ii), we conclude that

$$\begin{aligned} \varSigma h_{\varepsilon }(u_{\varepsilon })=T(\varepsilon )>N_k(\varTheta (\varepsilon )) \end{aligned}$$

if \(\varepsilon >0\) is small enough, with \(\varTheta (\varepsilon )\) such that \(h_\varepsilon '(u_\varepsilon )=R_k(\varTheta (\varepsilon ))\). Thus, by Theorem 7, a subcritical Neimark–Sacker bifurcation arises at \(\alpha _k\left( \frac{\varTheta (\varepsilon )}{k+1}\right) = a_k(h_\varepsilon '(u_\varepsilon ))\), as we desired to prove. \(\square \)

Proof (Theorem 4)

From Lemmas 3 and 4 we get

$$\begin{aligned} \lim _{k\rightarrow \infty }\frac{N'_k(\pi )}{R'_k(\pi )}=1/4. \end{aligned}$$

Then there is \(k_0\) such that \(N'_k(\pi )/R'_k(\pi )>L=\lim _{\varepsilon \rightarrow 0}T'(\varepsilon )/D'(\varepsilon )\) for any \(k\ge k_0\). We show that this number \(k_0\) is adequate to our purposes.

Fix \(k\ge k_0\) and define \(\varepsilon _1\) and \(\varTheta (\varepsilon )\) in \([0,\varepsilon _1]\) as in the previous proof (note that now \(\varTheta '(0)=0\), so \(\varTheta \) is just a decreasing differentiable homeomorphism). We have

$$\begin{aligned} \lim _{\varepsilon \rightarrow 0}\frac{(N_k\circ \varTheta )'(\varepsilon )}{D'(\varepsilon )} = \lim _{\varepsilon \rightarrow 0}\frac{N_k'(\varTheta (\varepsilon ))}{R_k'(\varTheta (\varepsilon ))} =\frac{N'_k(\pi )}{R'_k(\pi )}>\lim _{\varepsilon \rightarrow 0}\frac{T'(\varepsilon )}{D'(\varepsilon )}. \end{aligned}$$

Then \(T'(\varepsilon )>(N_k\circ \varTheta )'(\varepsilon )\), and hence \(T(\varepsilon )>(N_k\circ \varTheta )(\varepsilon )\), if \(\varepsilon \) is sufficiently small. Thus, as before, a subcritical Neimark–Sacker bifurcation arises. \(\square \)

6 Some Examples

The supercritical case

We show that Theorem 2 applies to all maps listed in Table 1. For the Wright function \(h(x)=p(e^{-x}-1)\) this is obvious because \(\varSigma h(x)\equiv 1\) (recall Theorem 2(c)). If

$$\begin{aligned} \textstyle {h(x)=x(1+p(1-\left( \frac{x}{z}\right) ^q))}, \end{aligned}$$

then we have \(h'(u)=h'(z)=1-pq<-1\) when \(pq>2\). Since

$$\begin{aligned} \varSigma h(u)=\varSigma h(z)=\frac{1-q(1+p)+pq^2}{pq+pq^2}<1, \end{aligned}$$

the bifurcation is supercritical again by Theorem 2(c). Similarly, for the Ricker function

$$\begin{aligned} h(x)=pxe^{-qx} \end{aligned}$$

we have \(h'(u)=h'\left( \frac{\log p}{q}\right) =1-\log p<-1\) whenever \(p>e^2\). On the other hand,

$$\begin{aligned} \varSigma h(u)=\varSigma \textstyle {h\left( \frac{\log p}{q}\right) } =\displaystyle {1-\frac{1}{(\log p-2)^2}}<1. \end{aligned}$$

The Shepherd function

$$\begin{aligned} h(x)=\frac{p x}{1+x^q} \end{aligned}$$

demands some care. To get \(h'(u)=h'((p-1)^\frac{1}{q})=1+\frac{(1-p)q}{p}<-1\), the inequality \(\frac{1}{p}+\frac{2}{q}<1\) is now required. Also, we have

$$\begin{aligned} \varSigma h(u)=\varSigma h\left( (p-1)^\frac{1}{q}\right) =\frac{(p(q-1)-q)(6q^2-6pq^2+p^2(q^2-1))}{(p-1)(p(q-1)-2q)^2q}. \end{aligned}$$

Although h has negative Schwarzian derivative, \(\varSigma h(u)\) may be very close to \(\frac{3}{2}\) when p is very large and q is very close to 2, as can be seen more clearly when examining the map

$$\begin{aligned} F(t,s)=\frac{(2-s-2t)(4-s^2-24t+24t^2)}{2(1-t)(2-s-4t)^2} \end{aligned}$$

which follows after replacing p by \(\frac{1}{t}\) and q by \(\frac{2}{s}\) in \(\varSigma h(u)\) (hence \(t+s<1\) now). Still, we have the possibility of using Theorem 2(b). And indeed, after getting

$$\begin{aligned} G(t,s)=\frac{2+s-2t}{2(1-t)} \end{aligned}$$

by similarly replacing p by \(\frac{1}{t}\) and q by \(\frac{2}{s}\) in \(\frac{2-h'(u)}{1-h'(u)}\), we find that

$$\begin{aligned} G(t,s)-F(t,s)=\frac{ 2t((2-s-2t)^2+2ts)}{(1-t)(2-s-4t)^2} \end{aligned}$$

is positive.
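The identity for \(G(t,s)-F(t,s)\), and its positivity, can be spot-checked numerically on admissible pairs \((t,s)\) with \(t+s<1\); in Python:

```python
def F(t, s):
    return (2 - s - 2*t)*(4 - s*s - 24*t + 24*t*t)/(2*(1 - t)*(2 - s - 4*t)**2)

def G(t, s):
    return (2 + s - 2*t)/(2*(1 - t))

def diff(t, s):   # the displayed expression for G - F
    return 2*t*((2 - s - 2*t)**2 + 2*t*s)/((1 - t)*(2 - s - 4*t)**2)

for t, s in [(0.1, 0.3), (0.05, 0.8), (0.3, 0.2), (0.45, 0.05)]:
    assert abs(G(t, s) - F(t, s) - diff(t, s)) < 1e-12
    assert G(t, s) > F(t, s)
```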

Fig. 3
figure 3

A local, but not global attractor, for Clark’s equation

The subcritical case

A simple example fulfilling all hypotheses of Theorem 3 is given by

$$\begin{aligned} h_\varepsilon (x)= \frac{1}{(1 - 2 \varepsilon ) (\varepsilon + (1 - \varepsilon ) x) + 2 \varepsilon (\varepsilon + (1 - \varepsilon ) x)^2}, \quad 0<\varepsilon <1/2,\quad I=[0,\infty ). \end{aligned}$$
(6.1)

Clearly, \(h_\varepsilon \) is bounded and \(h_\varepsilon '(x)<0\) for any \(x\in [0,\infty )\). Also, we have

$$\begin{aligned} Sh_\varepsilon (x)=-\frac{24(1-\varepsilon )^2\varepsilon ^2}{(1+(4x-2)\varepsilon +4(1-x)\varepsilon ^2)^2}<0 \end{aligned}$$

for any \(x\in [0,\infty )\). Its only fixed point is \(u_\varepsilon =u=1\) and \(h_\varepsilon '(u)=-1 - \varepsilon + 2 \varepsilon ^2 <-1\). Moreover,

$$\begin{aligned} T(\varepsilon )=\frac{3 (1+2 \varepsilon )^2 \left( 1+4 \varepsilon ^2\right) }{2 \left( 1+2 \varepsilon +4 \varepsilon ^2\right) ^2}, \end{aligned}$$
$$\begin{aligned} T'(\varepsilon )=\frac{48 \varepsilon ^3-12 \varepsilon }{\left( 1+2 \varepsilon +4 \varepsilon ^2\right) ^3}. \end{aligned}$$
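The fixed point, the value of \(h_\varepsilon '(u)\) and the closed form of \(Sh_\varepsilon \) displayed above can all be confirmed numerically, approximating derivatives by central differences; a Python sketch:

```python
def h(x, e):
    a = e + (1 - e)*x
    return 1/((1 - 2*e)*a + 2*e*a*a)

def S_closed(x, e):   # the displayed formula for the Schwarzian derivative
    return -24*(1 - e)**2*e**2/(1 + (4*x - 2)*e + 4*(1 - x)*e*e)**2

def S_fd(x, e, d=1e-3):   # Schwarzian h'''/h' - (3/2)(h''/h')^2 via differences
    h1 = (h(x + d, e) - h(x - d, e))/(2*d)
    h2 = (h(x + d, e) - 2*h(x, e) + h(x - d, e))/d**2
    h3 = (h(x + 2*d, e) - 2*h(x + d, e) + 2*h(x - d, e) - h(x - 2*d, e))/(2*d**3)
    return h3/h1 - 1.5*(h2/h1)**2

for e in (0.1, 0.25, 0.4):
    assert abs(h(1, e) - 1) < 1e-12                   # u = 1 is the fixed point
    d = 1e-6
    h1 = (h(1 + d, e) - h(1 - d, e))/(2*d)
    assert abs(h1 - (-1 - e + 2*e*e)) < 1e-7          # h'(u) = -1 - e + 2e^2
    for x in (0.5, 1.0, 2.0):
        assert abs(S_fd(x, e) - S_closed(x, e)) < 1e-2
```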

Figure 3 illustrates Theorem 3 for the maps (6.1). We take \(k=3\) and calculate \(\varepsilon \) (and hence r) so that \(\theta _3(r)=\frac{\pi }{4}-0.001\). Namely, we get \(\varepsilon =0.00167086\) and \(r=-1.00166528\). Also, we have \(a_3(r)=0.00563994\). Finally we fix \(\alpha =a_3(r)+0.0001\).

Now, for \(k=3\), the map \(h=h_\varepsilon \) and this parameter \(\alpha \), we depict pairs \((x_{n+1},x_n)\) for orbits of (1.8) starting at several initial conditions \((x_0,x_{-1},x_{-2},x_{-3})\). The closed curve is the two-dimensional projection of the unstable invariant curve promised by the subcritical Neimark–Sacker bifurcation. It has been painted using thick light grey points, which correspond to the first 800 iterates of the orbit starting (approximately) at

$$\begin{aligned} (1.898919, 1.570831, 0.995705,0.638023). \end{aligned}$$

“Inside” this curve, eight pairs of more or less “parallel arcs” can be counted. Those closer to the invariant curve correspond to the first 400 iterates of the orbit starting at

$$\begin{aligned} (1.8, 1.570831, 0.995705,0.638023); \end{aligned}$$

the other eight correspond to iterations from 100,000 to 100,400 of the same orbit; all of them consist of smaller, dark grey points. We have used even smaller, now black, points to analogously simulate the orbit starting at the point

$$\begin{aligned} (2, 1.570831,0.995705,0.638023). \end{aligned}$$

Finally, big black dots at (1.8, 1.570831), (1.898919, 1.570831), (2, 1.570831) and (1, 1) indicate these different initial conditions and the equilibrium.

The picture also shows some pairs near the axes. They correspond to the orbit starting at

$$\begin{aligned} (1.219971,0.0768226, 0.00488285,0.0308514) \end{aligned}$$

which, apparently, is contained in another (stable) invariant curve of the equation. The existence of this second curve is not anticipated by our results but it is not surprising because Clark’s equation is permanent, as mentioned in Sect. 2. Moreover, this curve and the point (1, 1, 1, 1) seem to be the only metric attractors of the equation.