1 Dynamical Systems and Their Classification

The concept of dynamical system that we will use here is taken from R.E. Kalman [1] who introduced it in the 1960s while studying the problem of linear filtering and prediction.

Roughly speaking, a system consists of a set of so-called states (generally vectors of real numbers), where the adjective dynamical emphasizes the fact that these states vary in time according to a suitable dynamical law. This concept of dynamical system is cast in the following definition.

Definition 2.1 (Dynamical System)

A dynamical system is an entity defined by the following axioms:

  1.

    There exist an ordered set T of times, a set X of states and a function ϕ from T × T × X to X. ϕ is called a state transition function.

  2.

    For all t, τ ∈ T and for all x ∈ X one has that ϕ(t, τ, x) represents the state at time t of a system whose initial state at time τ is x.

  3.

    The function ϕ satisfies the following properties:

    Consistency:

    ϕ(τ, τ, x) = x for all τ ∈ T, and for all x ∈ X.

    Composition:

    \(\phi (t_3, t_1, x) = \phi \bigl (t_3, t_2,\phi (t_2, t_1,x)\bigr )\), for all x ∈ X and for all t₁, t₂, t₃ ∈ T with t₁ < t₂ < t₃.

In the following we always consider \(X = \mathbb {R}^n\).

Definition 2.2 (Reversibility)

If the state transition function ϕ is defined for any (t, τ) in T × T, so that, once the initial time τ and the initial state x are assigned, the state of the system is uniquely determined both for the future (i.e. for all t > τ) and for the past (i.e. for t < τ), the system is said to be reversible.

If the state transition function ϕ is defined only for t ≥ τ, then the system is said to be irreversible.

Definition 2.3 (Event, Orbit and Flow)

For all t ∈ T, x ∈ X, the pair (t, x) is called an event. Moreover, for τ and x fixed, the function t ∈ T ↦ ϕ(t, τ, x) ∈ X is called a movement of the system. The set of all movements is called a flow. The image of the movement, i.e. the set

$$\displaystyle \begin{aligned} \Bigl\{ \ \phi (t,\tau,x) \ \bigm| \ t\in T \ \Bigr\}, \end{aligned}$$

is called an orbit (or a trajectory) of the system, i.e. the orbit passing through the state x at time τ.

It is not always possible to find a closed formula for the orbits of a dynamical system, but it is nonetheless possible to study the behaviour of the orbits for long times.

Definition 2.4 (Fixed or Equilibrium Point)

A state x* ∈ X is called a fixed point (or an equilibrium point) of the dynamics if there exist t₁, t₂ ∈ T, with t₂ > t₁, such that

$$\displaystyle \begin{aligned} \phi(t,t_1,x^*) = x^* \, , \qquad \text{for all }t\in T \cap [t_1,t_2] \, . \end{aligned}$$

x* is said to be a fixed point in an infinite time if there exists t₁ ∈ T such that

$$\displaystyle \begin{aligned} \phi(t,t_1,x^*) = x^* \qquad \text{for all }t\in T \cap \left[t_1,+\infty\right[. \end{aligned}$$

Definition 2.5 (Eventually Fixed Orbit)

An orbit is said to be eventually fixed if it contains a fixed point.

Definition 2.6 (Eventually Fixed Point)

A point is called eventually fixed if its orbit is eventually fixed.

Definition 2.7 (Stability)

The fixed point x* is stable if for every ε > 0 there exist δ > 0 and t₀ ∈ T such that for all x ∈ X with |x − x*| ≤ δ,

$$\displaystyle \begin{aligned} \big|\phi(t,\tau,x) - x^*\big| \le \varepsilon \qquad \text{holds for any }t > t_0. \end{aligned}$$

The fixed point x* is asymptotically stable if it is stable and there exists a δ > 0 such that for all x ∈ X with |x − x*| ≤ δ it holds that

$$\displaystyle \begin{aligned} \lim_{t\to\infty} \big|\phi(t,\tau,x)-x^*\big| = 0 \, . \end{aligned}$$

The fixed point x* is globally asymptotically stable if it is stable and

$$\displaystyle \begin{aligned} \lim_{t\to\infty} \big|\phi(t,\tau,x) - x^*\big| = 0 \, , \qquad \text{for any }\tau \in T\ \text{and}\ x \in X \, . \end{aligned}$$

Definition 2.8 (Autonomous System)

The system is called autonomous if

$$\displaystyle \begin{aligned} \phi (t,\tau,x) = \widetilde{\phi} (t-\tau,x) \end{aligned} $$
(2.1)

for some suitable function \(\widetilde {\phi }\).

That is, an autonomous system does not explicitly depend on the independent variable. If the variable is time (t), the system is called time-invariant. For example, the classical harmonic oscillator yields an autonomous system. A nonautonomous system of n ordinary first order differential equations can be changed into an autonomous system by enlarging its dimension with a trivial component, often of the form xₙ₊₁ = t.

Definition 2.9 (Discrete and Continuous System)

The system is called discrete, if the time set T is a subset of the set of the integers \(\mathbb {Z} = \{ \dotsc ,-3,-2,-1,0,1,2,3,\dotsc \}\).

The system is called continuous if T is an interval of real numbers.

In Sect. 2.2.1 we consider continuous-time dynamical systems, and in Sect. 2.5 we consider discrete-time dynamical systems.

2 Continuous-Time Dynamical Systems

2.1 Continuous-Time Dynamical Systems from Ordinary Differential Equations

Let \(I=[a,b]\subset \mathbb {R}\) and let \(f\colon I\times \mathbb {R}\to \mathbb {R}\).

We recall the following version of the Cauchy–Lipschitz Theorem (see Bonsante and Da Prato [3]).

Theorem 2.1 (Cauchy–Lipschitz)

Assume that there exists L > 0 such that

$$\displaystyle \begin{aligned} \big|f(t,x_1)-f(t,x_2)\big| \le L |x_1-x_2| \, , \end{aligned} $$
(2.2)

for any t ∈ I and \(x_1,x_2\in \mathbb {R}\). Then for any τ ∈ I, \(\xi \in \mathbb {R}\) the Cauchy problem

$$\displaystyle \begin{aligned} \begin{cases} \dot{x}(t) = f\bigl(t,x(t)\bigr) \, , & t \in I \, , \\ x(\tau) = \xi \, , \end{cases} \end{aligned} $$
(2.3)

has a unique solution in [a, b].

From Theorem 2.1 it follows that the ordinary differential equation

$$\displaystyle \begin{aligned} \dot{x} = f(t,x) \end{aligned} $$
(2.4)

defines a continuous reversible dynamical system. In fact the time set is T = I, the state set is \(X=\mathbb {R}\) and the state transition function ϕ is the function from \(I\times I\times \mathbb {R}\) to \(\mathbb {R}\) such that for all t, τ ∈ I, \(\xi \in \mathbb {R}\) one has that

$$\displaystyle \begin{aligned} \phi(t,\tau,\xi)=x(t) \, , \end{aligned}$$

where x(t) is the unique solution of the Cauchy problem (2.3). In this case the movements are the solutions to Eq. (2.4) and, for any solution x, the corresponding orbit is the interval \(\bigl \{ x(t) \bigm | t \in I \bigr \}\).

The system is autonomous if, and only if, the function f does not depend explicitly on t, that is we have

$$\displaystyle \begin{aligned} \dot{x}(t) = f\bigl(x(t)\bigr) \, \end{aligned}$$

(i.e. in the case of a differential equation of the form \(\dot {x}=f(x)\), with \(f\colon \mathbb {R}\to \mathbb {R}\) a differentiable function with continuous and bounded derivative), since in this case one has

$$\displaystyle \begin{aligned} \phi(t,\tau, x) = \phi(t-\tau,0,x) \qquad \text{ for all } \quad t,\tau,x \in \mathbb{R}. \end{aligned} $$
(2.5)

An equilibrium point is a solution of the differential equation \(\dot {x} = f(x)\) which is constant on an interval J = [t₁, t₂] ⊂ I. Hence, the equilibrium points of the system are the solutions \(x^* \in \mathbb {R}\) of the equation f(x*) = 0.
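Numerically, the equilibria of a scalar autonomous system can be located by scanning f for sign changes and refining each one by bisection. The following sketch is illustrative only: the helper names `bisect_root` and `equilibria`, and the model f(x) = x(1 − x) (with equilibria 0 and 1), are our own choices, not taken from the text.

```python
# Sketch: locating equilibria of x' = f(x) as zeros of f,
# via a sign-change scan followed by bisection refinement.

def bisect_root(f, lo, hi, tol=1e-12):
    """Refine a root of f in [lo, hi], assuming f(lo), f(hi) have opposite signs."""
    flo = f(lo)
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if flo * f(mid) <= 0:
            hi = mid
        else:
            lo, flo = mid, f(mid)
    return 0.5 * (lo + hi)

def equilibria(f, a, b, steps=1000):
    """Approximate zeros of f found by scanning [a, b] on a uniform grid."""
    h = (b - a) / steps
    roots = []
    for i in range(steps):
        x0, x1 = a + i * h, a + (i + 1) * h
        if f(x0) == 0:
            roots.append(x0)
        elif f(x0) * f(x1) < 0:
            roots.append(bisect_root(f, x0, x1))
    return roots

# Hypothetical example: f(x) = x*(1 - x) has equilibria x* = 0 and x* = 1.
print(equilibria(lambda x: x * (1 - x), -0.5, 1.5))
```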

2.2 Continuous-Time Dynamical Systems from Systems of Ordinary Differential Equations

The discussion contained in the previous Sect. 2.2.1 for a single equation can be extended to systems of ordinary differential equations.

In fact, if \(\boldsymbol {x} = (x_1,x_2,\dotsc ,x_n)\in \mathbb {R}^n\), let f = f(t, x) be a vector function from \(I \times \mathbb {R}^n\) to \(\mathbb {R}^n\), and let f₁, f₂, …, fₙ be the components of f.

Assume that f₁, f₂, …, fₙ are continuous functions in \(I \times \mathbb {R}^n\), that the partial derivatives of f₁, f₂, …, fₙ with respect to all variables x₁, x₂, …, xₙ exist and are continuous in \(I\times \mathbb {R}^n\), and that these partial derivatives are bounded in \([a,b] \times \mathbb {R}^n\) for all [a, b] ⊂ I.

Then, for all t₀ ∈ I and \(\boldsymbol {x}_0 \in \mathbb {R}^n\) the Cauchy problem

$$\displaystyle \begin{aligned} \begin{cases} \dot{\boldsymbol{x}}(t) = \boldsymbol{f}\bigl(t,\boldsymbol{x}(t)\bigr) \, , & t \in I \, , \\ \boldsymbol{x}(t_0) = \boldsymbol{x}_0 \, , \end{cases} \end{aligned} $$
(2.6)

has one and only one solution on the interval I.

Therefore, the system of ordinary differential equations

$$\displaystyle \begin{aligned} \dot{\boldsymbol{x}}(t) = \boldsymbol{f}\bigl(t,\boldsymbol{x}(t)\bigr) \end{aligned}$$

defines a reversible continuous dynamical system.

The time set is T = I, the state set is \(X = \mathbb {R}^n\) and the state transition function is the mapping ϕ from \(I \times I \times \mathbb {R}^n \) to \(\mathbb {R}^n\) such that for all t, τ ∈ I, \(\boldsymbol {x} \in \mathbb {R}^n\) one has that ϕ(t, τ, x) is the value at t of the unique solution of the Cauchy problem (2.6).

In this case, the movements are solutions of the system (2.6) and, for any solution x(t) of such a system of differential equations, the corresponding orbit is a curve in \(\mathbb {R}^n\) of the parametric equation x = x(t), t ∈ I.

As before, the system is autonomous if f is independent of t, i.e. in the case of a system of differential equations of the form \( \dot {\boldsymbol {x}} = \boldsymbol {f}(\boldsymbol {x})\). In this case, an equilibrium point is a solution of the system of differential equations \(\dot {\boldsymbol {x}} = \boldsymbol {f}(\boldsymbol {x})\) that is constant on an interval J ⊂ I. Thus, the equilibrium points of the system are the solutions \(\boldsymbol {x}^*\in \mathbb {R}^n\) of the system of equations f(x*) = 0.

Remark 2.1

A nonautonomous system of n ordinary first order differential equations can be changed into an autonomous system, by enlarging its dimension using a trivial component, often of the form xₙ₊₁ = t.
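The augmentation in Remark 2.1 can be checked numerically: the scalar equation ẋ = f(t, x) and the augmented autonomous planar system (ẏ₁ = f(y₂, y₁), ẏ₂ = 1) produce the same trajectory. The right-hand side f(t, x) = t and the explicit Euler scheme below are illustrative choices of ours, not taken from the text.

```python
# Sketch of the autonomization trick: integrate x' = f(t, x) directly,
# and as the autonomous system y1' = f(y2, y1), y2' = 1 (y2 plays the
# role of time). Both use the same explicit Euler scheme.

def euler_nonautonomous(f, x0, t0, t1, n):
    x, t = x0, t0
    h = (t1 - t0) / n
    for _ in range(n):
        x += h * f(t, x)
        t += h
    return x

def euler_autonomous(F, y0, steps, h):
    y = list(y0)
    for _ in range(steps):
        dy = F(y)
        y = [yi + h * di for yi, di in zip(y, dy)]
    return y

f = lambda t, x: t                    # hypothetical nonautonomous right-hand side
F = lambda y: [f(y[1], y[0]), 1.0]    # augmented autonomous vector field

n = 10000
x_direct = euler_nonautonomous(f, 0.0, 0.0, 1.0, n)
x_augmented = euler_autonomous(F, [0.0, 0.0], n, 1.0 / n)[0]
# Both approximate x(1) = 1/2 (exact solution x(t) = t^2/2) and agree.
print(x_direct, x_augmented)
```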

Remark 2.2

The notion of dynamical system, as outlined in Definition 2.1, describes the case in which the evolution of the system depends only on internal causes.

However, there are situations where the evolution of the system can be modified through the action of external forces, i.e. by means of a time-dependent input vector function u. In this case Definition 2.1 can be generalized in the sense that a dynamical system is characterized by a time set T, a state set X, an input set U with a set Ω of admissible input functions from T to U and a state transition function ϕ from T × T × X × Ω to X such that for all t, τ ∈ T, x ∈ X, u ∈ Ω, ϕ(t, τ, x, u) represents the state of the system at time t, if the state is x at time τ with an input function u acting on the system.

Obviously, the state of the system at time t will only depend on the initial time τ, the initial state x and the restriction of the input function u to the interval with endpoints τ and t. Hence, we have to assume that the state transition function ϕ satisfies the following properties.

Consistency:

ϕ(τ, τ, x, u) = x for all (τ, x, u) ∈ T × X × Ω.

Composition:

\(\phi (t_3,t_1,\boldsymbol {x},\boldsymbol {u})=\phi \left ( t_3,t_2,\phi \left ( t_2,t_1,\boldsymbol {x},\boldsymbol {u} \right ) ,\boldsymbol {u} \right ) \) for each (x, u) ∈ X × Ω and for each t₁ < t₂ < t₃.

Causality:

If u, v ∈ Ω and u|[τ,t] = v|[τ,t], then ϕ(t, τ, x, u) = ϕ(t, τ, x, v).

This generalized framework provides the theoretical basis for the study of continuous-time systems with inputs.

3 Stability of Continuous-Time Systems

We recall some known facts about the exponential of a square matrix.

Definition 2.10 (Exponential of Square Matrix)

Given a square matrix A, the exponential of A is defined by

$$\displaystyle \begin{aligned} \exp(A) = \sum_{j=0}^{+\infty} \frac{1}{j!} A^j = I + A + \frac{1}{2} A^2 + \frac{1}{6} A^3 + \dotsb + \frac{1}{j!} A^j + \dotsb \, . \end{aligned}$$

The basic properties of the exponential are listed below:

  • If 0 is the null matrix, then \(\exp (0) = I\).

  • If A and B commute, that is AB = BA, then \(\exp (A) \, \exp (B) = \exp (A+B)\).

    In particular \(\exp (\alpha A) \, \exp (\beta A) = \exp \bigl ((\alpha +\beta ) A\bigr )\), for any \(\alpha ,\beta \in \mathbb {R}\).

  • For any square matrix A, \(\exp (A)\) is invertible; moreover

    $$\displaystyle \begin{aligned} \bigl[\exp(A)\bigr]^{-1} = \exp(-A) \, . \end{aligned}$$
  • \(\exp (A^T) = \exp (A)^T\).

  • \(\det \bigl (\exp (A)\bigr ) = e^{\operatorname {tr}(A)}\), where \(\operatorname {tr}(A)\) denotes the trace of A.

  • If B = PAP −1, where P is an invertible matrix, then

    $$\displaystyle \begin{aligned} \exp(B) = P\,\exp(A)\,P^{-1} \, . \end{aligned}$$
  • If A is diagonal

    $$\displaystyle \begin{aligned} A = \begin{pmatrix} \lambda_1 & 0 & & \ldots & 0 \\ 0 & \lambda_2 & 0 & \\ \vdots & & \ddots & & \vdots \\ 0 & \ldots & & 0 & \lambda_n \\ \end{pmatrix} \, , \end{aligned}$$

    then

    $$\displaystyle \begin{aligned} \exp(A) = \begin{pmatrix} e^{\lambda_1} & 0 & & \ldots & 0 \\ 0 & e^{\lambda_2} & 0 & \\ \vdots & & \ddots & & \vdots \\ 0 & \ldots & & 0 & e^{\lambda_n} \\ \end{pmatrix} \, . \end{aligned}$$
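Definition 2.10 and the listed properties can be checked numerically by truncating the series. The sketch below is illustrative: it handles only 2×2 matrices, the helpers `mat_mul` and `mat_exp` are our own, and the truncated sum is an approximation of the exact exponential. For A = [[0, 1], [−1, 0]] one has A² = −I, so exp(A) = cos(1) I + sin(1) A, and tr(A) = 0 gives det(exp A) = e⁰ = 1.

```python
# Truncated exponential series for 2x2 matrices (pure Python sketch),
# checking det(exp A) = e^{tr A} on a rotation generator.
import math

def mat_mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def mat_exp(A, terms=30):
    """Partial sum  sum_{j=0}^{terms} A^j / j!  of the exponential series."""
    result = [[1.0, 0.0], [0.0, 1.0]]   # starts from the identity (j = 0 term)
    power = [[1.0, 0.0], [0.0, 1.0]]
    for j in range(1, terms + 1):
        power = mat_mul(power, A)
        result = [[result[r][c] + power[r][c] / math.factorial(j)
                   for c in range(2)] for r in range(2)]
    return result

A = [[0.0, 1.0], [-1.0, 0.0]]           # tr(A) = 0, exp(A) is a rotation by 1 radian
E = mat_exp(A)
det = E[0][0] * E[1][1] - E[0][1] * E[1][0]
print(det)          # close to e^{tr A} = 1
print(E[0][0])      # close to cos(1)
```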

When we deal with a linear system of differential equations expressed in matrix form as

$$\displaystyle \begin{aligned} \dot{\boldsymbol{x}}=A \boldsymbol{x} \, , \end{aligned}$$

A being a fixed matrix, the solution for the initial point x 0 at t = 0 is given by

$$\displaystyle \begin{aligned} \boldsymbol{x}(t) = \exp(tA) \, \boldsymbol{x}_0\,. \end{aligned}$$

Indeed, as

$$\displaystyle \begin{aligned} \boldsymbol{x}(t) = \sum_{j=0}^{+\infty} \frac{t^j}{j!} A^j \, \boldsymbol{x}_0 \end{aligned}$$

we have

$$\displaystyle \begin{aligned} \dot{\boldsymbol{x}}(t) & = \sum_{j=1}^{+\infty} \frac{t^{j-1}}{(j-1)!} A^j \, \boldsymbol{x}_0 = \sum_{j=0}^{+\infty} \frac{t^j}{j!} A^{j+1} \, \boldsymbol{x}_0 \\ & = A\,\sum_{j=0}^{+\infty} \frac{t^j}{j!} A^j \, \boldsymbol{x}_0 = A\,\boldsymbol{x}(t) \, . \end{aligned} $$

We can obtain the behaviour of the solution x(t) by studying the eigenvalues of the matrix A. Indeed, assume for example that A is diagonalizable and all the eigenvalues λ j, j = 1, …, n, have negative real part. We then have

$$\displaystyle \begin{aligned} \boldsymbol{x}(t) = \exp(tA) \, \boldsymbol{x}_0 = P \begin{pmatrix} e^{\lambda_1\,t} & 0 & & \ldots & 0 \\ 0 & e^{\lambda_2\,t} & 0 & \\ \vdots & & \ddots & & \vdots \\ 0 & \ldots & & 0 & e^{\lambda_n\,t} \\ \end{pmatrix} P^{-1} \boldsymbol{x}_0 \, , \end{aligned}$$

for some invertible matrix P. As \(e^{\lambda _j\,t}\to 0\) for t → + for every j = 1, …, n, we see that x(t) → 0, i.e. the equilibrium x* = 0 is asymptotically stable.

The Hartman–Grobman Theorem 2.2 given below will elucidate the behaviour around the fixed points of nonlinear systems by a linearization in a neighbourhood of the equilibrium. To this end, we need to introduce the following definitions (Zimmerman [11]).

Definition 2.11 (Homeomorphism)

A function h: X → Y is a homeomorphism between X and Y if it is continuous and bijective (one-to-one and onto function) with a continuous inverse denoted by h −1.

Remark 2.3

A homeomorphism means that X and Y have similar structure and that h (resp., h −1) may stretch and bend the space but does not tear it.

Definition 2.12 (Diffeomorphism)

A function \(f\colon U \subseteq \mathbb {R}^n \rightarrow V \subseteq \mathbb {R}^n\) is called a diffeomorphism of class \(\mathcal {C}^{k}\) if it is surjective (onto) and injective (one-to-one), and if the components of f and its inverse have continuous partial derivatives up to the k-th order with respect to all variables.

Definition 2.13 (Embedding)

An embedding is a homeomorphism onto its image.

Definition 2.14 (Topological Conjugacy)

Given two maps f : X → X and g: Y → Y, the map h: X → Y is a topological semi-conjugacy if it is continuous, surjective and h ∘ f = g ∘ h, where ∘ denotes function composition.

In addition, if h is a homeomorphism between X and Y, then we say that h is a topological conjugacy and that X and Y are homeomorphic.

Definition 2.15 (Hyperbolic Fixed Point)

In the case of a continuous-time dynamical system,

$$\displaystyle \begin{aligned} \dot{\boldsymbol{x}} = \boldsymbol{f} (\boldsymbol{x}), \end{aligned}$$

a hyperbolic fixed point is a fixed point x* for which all the eigenvalues of the Jacobian matrix

$$\displaystyle \begin{aligned} D\boldsymbol{f} = \begin{pmatrix} \partial_{x_1} f_1 & \partial_{x_2} f_1 & \dotsc & \partial_{x_n} f_1 \\ \partial_{x_1} f_2 & \partial_{x_2} f_2 & \dotsc & \partial_{x_n} f_2 \\ \vdots & & & \vdots \\ \partial_{x_1} f_n & \partial_{x_2} f_n & \dotsc & \partial_{x_n} f_n \end{pmatrix} \end{aligned}$$

evaluated at x* have non-zero real part.

Theorem 2.2 (Hartman–Grobman)

Let f be \(\mathcal {C}^1\) on some \(E \subset \mathbb {R}^n\) and let x* be a hyperbolic fixed point that without loss of generality we can assume to be x* = 0. Consider the nonlinear system \(\dot {\boldsymbol {x}} = \boldsymbol {f} (\boldsymbol {x})\) with flow ϕ(t, 0, x) and the linear system \(\dot {\boldsymbol {x}}=A\boldsymbol {x}\), where A is the Jacobian D f(0). Let \(I_0 \subset \mathbb {R}\), \(X \subset \mathbb {R}^n\) and \(Y \subset \mathbb {R}^n\) be such that X, Y and I₀ each contain the origin. Then, there exists a homeomorphism H: X → Y such that for all initial points x₀ ∈ X and all t ∈ I₀

$$\displaystyle \begin{aligned} H\bigl(\phi(t,0,\boldsymbol{x_0})\bigr) = e^{tA} H(\boldsymbol{x_0}) \end{aligned}$$

holds. Thus, the flow of the nonlinear system is homeomorphic to e tA (i.e. to the flow of the linearized system).

A sufficient condition for an equilibrium xₐ to be stable is given by the following theorem.

Theorem 2.3 (Lyapunov [8])

Let Ω be an open subset of \(\mathbb {R}^n\), and \(\boldsymbol {f}\colon \varOmega \to \mathbb {R}^n\) be a \(\mathcal {C}^1\) function. Let xₐ ∈ Ω be a zero of f.

Consider the dynamical system

$$\displaystyle \begin{aligned} \dot{\boldsymbol{x}}(t) = \boldsymbol{f}\bigl(\boldsymbol{x}(t)\bigr) , \qquad \boldsymbol{x} \in \mathbb{R}^n \, . \end{aligned}$$

The equilibrium x(t) = xₐ is stable if all the eigenvalues of the Jacobian matrix of f at xₐ have a negative real part.

We end this section recalling a useful tool to prove the stability of equilibria.

Definition 2.16 (Lyapunov Function)

Let \(\boldsymbol {f} \colon \mathbb {R}^n \to \mathbb {R}^n\), with f(x₀) = 0, and consider the autonomous dynamical system

$$\displaystyle \begin{aligned} \dot{\boldsymbol{x}}(t) = \boldsymbol{f}\bigl(\boldsymbol{x}(t)\bigr) \, , \end{aligned}$$

so that x(t) ≡ x₀ is an equilibrium point.

A weak Lyapunov function (resp., a strong Lyapunov function) with respect to x₀ is a scalar \(\mathcal {C}^1\) function L defined in a neighbourhood \(\mathcal {U}\) of x₀ such that:

  • L(x₀) = 0 and L(x) > 0 for all \(x \in \mathcal {U}\setminus \{\boldsymbol {x}_0\}\);

  • ∇L(x) ⋅ f(x) ≤ 0 (resp., ∇L(x) ⋅ f(x) < 0) for all \(x \in \mathcal {U}\setminus \{\boldsymbol {x}_0\}\).

Theorem 2.4

If there exists a weak Lyapunov function (resp., a strong Lyapunov function) with respect to the point x₀, then x₀ is Lyapunov stable (resp., asymptotically stable).
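Theorem 2.4 can be illustrated numerically on a toy system of our choosing: for ẋ = −x and the candidate L(x) = x², one has ∇L(x)·f(x) = −2x² < 0 away from 0, so L is a strong Lyapunov function and x₀ = 0 is asymptotically stable. The sketch checks the sign condition on a sampled neighbourhood and verifies that L decreases along an Euler trajectory (both the system and the step size are hypothetical choices).

```python
# Numerical illustration of Theorem 2.4 for x' = -x, L(x) = x^2, x0 = 0.

f = lambda x: -x
L = lambda x: x * x
gradL_dot_f = lambda x: 2 * x * f(x)   # = -2 x^2

# Sign condition on a punctured sampled neighbourhood of 0:
samples = [x / 100 for x in range(-100, 101) if x != 0]
all_negative = all(gradL_dot_f(x) < 0 for x in samples)

# L decreases monotonically along an explicit Euler trajectory:
x, h, values = 1.0, 0.01, []
for _ in range(500):
    values.append(L(x))
    x += h * f(x)
decreasing = all(a > b for a, b in zip(values, values[1:]))
print(all_negative, decreasing)
```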

4 Limit Cycles and Periodicity of Continuous-Time Systems

In this section we consider a continuous-time dynamical system described by the state transition function ϕ(t, τ, x). Unless differently specified, the following definitions and results are taken from R. Devaney [5], H. W. Lorenz [7] and S. Sternberg [10].

Definition 2.17 (Limit Cycle)

A limit cycle (see Fig. 2.1) is a closed orbit Γ for which there exists a tubular neighbourhood U(Γ) [9] such that for all x ∈ U(Γ) one has

$$\displaystyle \begin{aligned} \lim\limits _{t \to +\infty} d(\phi(t,\tau,x) , \varGamma ) = 0 , \end{aligned} $$
(2.7)

where we have

$$\displaystyle \begin{aligned} d(y, \varGamma) = \inf \limits _{z\in \varGamma}|y-z|. \end{aligned} $$
(2.8)
Fig. 2.1: A limit cycle in the three-dimensional phase space

In order to establish the existence of limit cycles, in the two-dimensional case, we can refer to the following theorem by Poincaré and Bendixson.

Theorem 2.5 (Poincaré–Bendixson [7])

Let D be a non-empty, compact (i.e. closed and bounded) set of the plane not containing fixed points of a \(\mathcal {C}^1 \) vector field f from D to \(\mathbb {R}^2\) and let γ ⊂ D be an orbit of the system \(\dot {\boldsymbol {x}} = \boldsymbol {f(x)}\). Then, either γ is a closed orbit or γ asymptotically approaches a closed orbit (i.e. there exists a limit cycle in D).

The limitations of Theorem 2.5 are that one must find a suitable set D and that it is valid only in two dimensions. For example, suppose that there exists a compact set \(D \subset \mathbb {R}^{3}\) with the vector field pointing inwards to D (see Fig. 2.2b) and that there is a unique unstable equilibrium. Nevertheless, it is possible that no closed orbit exists, because a trajectory can wander arbitrarily in \(\mathbb {R}^{3}\) without either intersecting itself or approaching a limit set (see Fig. 2.3).

Fig. 2.2: Convergence to the limit cycle. On the boundary of D, the vector field points inwards; therefore, once a trajectory enters D, it stays in D forever. (a) A system with a stable limit cycle in a vector field. (b) Limit cycle in a compact set D [7]

Fig. 2.3: In \( \mathbb {R}^3\) the Poincaré–Bendixson theorem is invalid

It is a simple consequence of the theorem of Poincaré and Bendixson, together with the uniqueness of the solutions of such systems, that while two-dimensional systems can produce limit cycles, dimension three is required to obtain chaos.

The following result, in contrast to Theorem 2.5, provides a criterion to establish the non-existence of closed orbits of a dynamical system in \(\mathbb {R}^2\):

Theorem 2.6 (Bendixson Negative Criterion [2])

Let

$$\displaystyle \begin{aligned} B_R=\bigl\{(x,y)\in\mathbb{R}^2\bigm| x^2+y^2<R\bigr\} , \end{aligned}$$

with R > 0, and let\(f,g\in \mathcal {C}^1(B_R)\)be such that

$$\displaystyle \begin{aligned} \partial_x f(x,y) + \partial_y g(x,y)\end{aligned} $$

has constant sign and vanishes only at a finite number of points.

Then there exist no closed orbits in B_R of the autonomous system

$$\displaystyle \begin{aligned} \begin{cases} x'=f(x,y) \\ y'=g(x,y) \, . \end{cases} \end{aligned}$$

Theorem 2.6 holds true in a more general setting: we can replace B_R by a generic simply connected subset of \(\mathbb {R}^2\), and assume that ∂_x f(x, y) + ∂_y g(x, y) has constant sign except on a set of zero measure.
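Bendixson's criterion can be checked numerically on a concrete planar field. The damped linear oscillator x′ = y, y′ = −x − y is an illustrative system of our choosing: here ∂ₓf + ∂ᵧg = 0 + (−1) = −1 everywhere, a constant negative sign, so the criterion excludes closed orbits in every disc B_R.

```python
# Sampling the divergence of Theorem 2.6 for the (hypothetical) damped
# oscillator x' = y, y' = -x - y over a grid in the plane.
import itertools

f = lambda x, y: y
g = lambda x, y: -x - y

def divergence(x, y, h=1e-6):
    """Central-difference approximation of d/dx f + d/dy g at (x, y)."""
    dfx = (f(x + h, y) - f(x - h, y)) / (2 * h)
    dgy = (g(x, y + h) - g(x, y - h)) / (2 * h)
    return dfx + dgy

grid = [i / 10 for i in range(-20, 21)]
divs = [divergence(x, y) for x, y in itertools.product(grid, grid)]
constant_sign = all(d < 0 for d in divs)
print(constant_sign)   # constant negative sign: no closed orbits can exist
```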

In one-dimensional systems there are no periodic solutions: trajectories increase or decrease monotonically, or remain constant. In two-dimensional systems, the Poincaré–Bendixson theorem provides an important result. It states that if a trajectory is confined to a closed and bounded region that contains no equilibrium points, then the trajectory must eventually approach a closed orbit. This result implies that chaotic attractors cannot occur in planar dynamical systems.

5 Discrete-Time Dynamical Systems

Definition 2.18 (Map)

Let X be a subset of \(\mathbb {R}^d\), d ≥ 1, and let f : X → X be any function. The recursive formula

$$\displaystyle \begin{aligned} x_{n+1} = f(x_n) \end{aligned} $$
(2.9)

defines a discrete dynamical system referred to as a d-dimensional map.

If we denote by \(f^{\circ n}\) the n-th iterate of f, i.e. \(f^{\circ 0}\) is the identity on X for n = 0 and, for n ≥ 1, \(f^{\circ n}\) is the composition of f with itself n times, that is

$$\displaystyle \begin{aligned} f^{\circ n}(x) = \begin{cases} x & \text{if }n=0 \, , \\ f(x) & \text{if }n=1 \, , \\ (f\circ f^{\circ (n-1)})(x) = f\bigl(f^{\circ (n-1)}(x)\bigr) & \text{if }n>1 \, , \end{cases} \end{aligned} $$
(2.10)

then the state transition function ϕ is defined by

$$\displaystyle \begin{aligned} \phi(t,\tau,x) = f^{\circ (t-\tau)} (x) \qquad \text{for all }t,\tau \in \mathbb{N},\text{ and }t \ge \tau , \end{aligned} $$
(2.11)

since it is evident that ϕ satisfies the consistency and composition properties.

In this case, a movement is a sequence \(\{x_n\}_{n\in \mathbb {N}}\) such that xₙ₊₁ = f(xₙ) for all n, whereas an orbit is a set of the form {x₀, x₁, x₂, …, xₙ, …} with xₙ₊₁ = f(xₙ) for all \(n\in \mathbb {N}\).

In the following we focus on 1-dimensional dynamical systems.

Example 2.1

If d = 1 and f(x) = x², the orbit of f with initial point x₀ = 2 is the set \( \{ 2,4,16, 256, \dotsc \} = \{2^{2^n}\}_{n\in \mathbb {N}}\).
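The orbit of Example 2.1 can be reproduced by explicit iteration; the helper `orbit` below is our own sketch, not part of the text.

```python
# Computing the first points of an orbit of a 1-dimensional map by iteration.

def orbit(f, x0, n):
    """Return the n+1 points x0, f(x0), ..., f^{on}(x0)."""
    points = [x0]
    for _ in range(n):
        points.append(f(points[-1]))
    return points

print(orbit(lambda x: x * x, 2, 4))  # [2, 4, 16, 256, 65536], i.e. 2^(2^n)
```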

Remark 2.4

Note that the dynamical system defined by Eq. (2.9) is autonomous (cf. Eq. (2.1)). Moreover, it is reversible if and only if the function f is bijective.

Indeed, if f −1 is the inverse of f, we can extend the definition of f n also to negative n by

$$\displaystyle \begin{aligned} f^{\circ (-n)} = (f^{-1})^{\circ n} \, , \qquad \text{for }n\in\mathbb{N} \, ; \end{aligned}$$

hence, (2.11) holds true also for t < τ.

Example 2.2

If f(x) = x², as f is not injective, the associated dynamical system is not reversible: knowing x₁ = 1, one cannot deduce whether the initial point is x₀ = 1 or x₀ = −1.

Definition 2.19 (Fixed or Equilibrium Point for a Discrete-Time System)

A fixed point (or an equilibrium point) x* is a point of X such that f(x*) = x*. In this discrete-time case, the orbit departing from x* is the singleton {x*}.

Example 2.3

Let f(x) = x².

The points 0 and 1 are the only fixed points of f. The point x = −1 is not fixed, but it is an eventually fixed point for f because f(−1) = 1 and 1 is a fixed point.

Definition 2.20 (Periodic Orbit, Cycle)

The orbit of initial point x₀ is said to be periodic, or a cycle, if there exists \(p\in \mathbb {N}\), p ≥ 1, such that \(f^{\circ p}(x_0) = x_0\). In this case, x₀ is a periodic (or a cyclic) point.

The smallest number p such that \(f^{\circ p}(x_0) = x_0\) is called the period of x₀ (or of its orbit). To emphasize the period p, we say that x₀ (or its orbit) is a p-periodic point (or a p-periodic orbit).

Remark 2.5

A periodic orbit means that after a finite number of iterations we return to the initial point and therefore the orbit has a finite number of elements.

Remark 2.6 (Period of an Orbit)

A point x₀ is periodic of period p if and only if x₀ is a fixed point of \(f^{\circ p}\). In particular, a fixed point x₀ for f is fixed for all iterates of f.

Often, fixed points are also called period-1 fixed points.

Example 2.4

If f(x) = −x, then x₀ = 0 is the only fixed point of f and all other points have period 2, the orbits being the sets of the form {x, −x}.
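The period of a point can be found by direct iteration; the helper `period` below is a hypothetical sketch (it returns None when no period up to `max_p` is found, which only certifies periodicity, never its absence).

```python
# Finding the period of a point x0 under a map f by iterating until
# the orbit returns to x0.

def period(f, x0, max_p=100):
    """Smallest p <= max_p with f^{op}(x0) == x0, or None if none is found."""
    x = x0
    for p in range(1, max_p + 1):
        x = f(x)
        if x == x0:
            return p
    return None

neg = lambda x: -x
print(period(neg, 0), period(neg, 3.5))  # 0 is fixed (period 1); 3.5 has period 2
```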

Definition 2.21 (Eventually Periodic Orbit)

An orbit is said to be eventually periodic if it contains a periodic point. Analogously, a point is called eventually periodic if its orbit is eventually periodic.

Example 2.5

Consider the function f(x) = 1 − x⁴. The point x₀ = −1 is an eventually periodic point because f(−1) = 0 and 0 is contained in the cycle (0, 1).

Remark 2.7 (Recursive Methods for Finding Fixed Points)

Recursive expressions of the form of Eq. (2.9) are often used in numerical computations for solving equations. An example is given by the so-called Babylonian algorithm to approximate the square root of a number a > 0:

$$\displaystyle \begin{aligned} x_{n+1} = \frac{1}{2} \left( x_{n} + \frac{a}{x_{n}} \right). \end{aligned} $$
(2.12)

A more general algorithm is the Newton method for approximating a zero of a differentiable function g:

$$\displaystyle \begin{aligned} x_{n+1} = x_{n} - \frac{g(x_{n})}{g'(x_{n})} \, . \end{aligned} $$
(2.13)

For example, if g(x) = x² − a, so that g′(x) = 2x, then the Newton algorithm in Eq. (2.13) reduces to Eq. (2.12).
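Both recursions of Remark 2.7 can be sketched directly; the function names and the starting point x₀ = 1 below are our own choices. For g(x) = x² − a the Newton step x − (x² − a)/(2x) = (x + a/x)/2 is exactly the Babylonian update.

```python
# The Babylonian iteration (2.12) and the general Newton step (2.13);
# for g(x) = x^2 - a they perform the same update.
import math

def babylonian_sqrt(a, x0=1.0, n=20):
    x = x0
    for _ in range(n):
        x = 0.5 * (x + a / x)
    return x

def newton(g, dg, x0, n=20):
    x = x0
    for _ in range(n):
        x = x - g(x) / dg(x)
    return x

a = 2.0
r1 = babylonian_sqrt(a)
r2 = newton(lambda x: x * x - a, lambda x: 2 * x, 1.0)
print(r1, r2, math.sqrt(a))  # all three agree to machine precision
```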

5.1 Cobweb Diagram

For a one-dimensional map f, the cobweb diagram (or Verhulst diagram) is a method to graphically describe the orbit of an initial point x 0.

After drawing the graph of f(x) and the bisector r of the first and third quadrants, draw the point \(P_0 \equiv \bigl (x_0,f(x_0)\bigr )\).

Let x₁ = f(x₀): draw the horizontal line from P₀ to the point on r with coordinates (x₁, x₁) and draw the vertical line from this point to the point on the graph of f with coordinates \(P_1 \equiv \bigl (x_1,f(x_1)\bigr )\).

For higher iterates we repeat the procedure. From x₂ = f(x₁), we draw the horizontal line from P₁ to the point on r with coordinates (x₂, x₂) and the vertical line from this point to the point on the graph of f with coordinates \(P_2 \equiv \bigl (x_2,f(x_2)\bigr )\) (Fig. 2.4).

Fig. 2.4: The cobweb diagram
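The cobweb construction above can be computed as a list of vertices rather than drawn. The sketch below is illustrative: the helper `cobweb_vertices` and the map f(x) = 0.5x + 1 (a hypothetical example with fixed point x* = 2, attracting since |f′| = 0.5 < 1) are our own choices.

```python
# Vertices visited by the cobweb construction: alternately a point on the
# graph of f and a point on the bisector r.

def cobweb_vertices(f, x0, n):
    points = [(x0, f(x0))]            # P0 on the graph of f
    x = x0
    for _ in range(n):
        x1 = f(x)
        points.append((x1, x1))       # horizontal move to the bisector r
        points.append((x1, f(x1)))    # vertical move back to the graph of f
        x = x1
    return points

verts = cobweb_vertices(lambda x: 0.5 * x + 1, 0.0, 30)
print(verts[-1])   # approaches the fixed point (2, 2)
```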

6 Attractors and Repellers

In this section, we provide some definitions concerning the behaviour of dynamical systems that, unless differently specified, follow the conventions in R. Devaney [5], H. W. Lorenz [7] and S. Sternberg [10].

Throughout the section \(f\colon \mathbb {R}\to \mathbb {R}\) is a twice continuously differentiable function, and f n denotes its n-th iterate (cf. (2.10)).

Definition 2.22 (Critical Point)

We say that \(x_c \in \mathbb {R}\) is a critical point of f if f′(x_c) = 0.

The critical point x_c is degenerate if f″(x_c) = 0 and non-degenerate if f″(x_c) ≠ 0.

Remark 2.8

Degenerate critical points may be maxima, minima or inflection points; non-degenerate critical points, instead, must be either maxima or minima.

Example 2.6

Consider the functions fₙ(x) = xⁿ, \(n\in \mathbb {N}\).

If n ≥ 2, then fₙ has a critical point in c = 0. In particular, if n = 2 the critical point is non-degenerate, whereas if n > 2 the critical point is degenerate.

If n is even, the critical point is a minimum, whereas if n is odd the critical point is an inflection point.

Remark 2.9 (Classification of Critical Points [4])

A critical point is stable if the orbit of the system remains inside a bounded neighbourhood of the point for all times n after some n*. A point is asymptotically stable if it is stable and the orbit approaches the critical point as n → . If a critical point is not stable, then it is unstable (see Fig. 2.5). In some instances critical points can have mixed characteristics (see Fig. 2.6).

Fig. 2.5: Different examples of critical points. From top left clockwise: a centre, denoting a stable but not asymptotically stable point; an asymptotically stable node and an asymptotically stable spiral, both denoted as a sink; an unstable spiral and an unstable node, i.e. a source. The last figure shows a saddle node, where some orbits converge and others diverge

Fig. 2.6: A critical point combining a sink (I and II quadrants) and a saddle (III and IV quadrants)

Definition 2.23 (Limit Set)

Given x ∈ X, the limit set of x is the set A of points ω ∈ X for which there is an increasing sequence of natural numbers \(\{n_j\}_{j\in \mathbb {N}}\) such that

$$\displaystyle \begin{aligned} \lim_{j \to +\infty} f^{\circ n_j}(x) = \omega \, . \end{aligned}$$

Definition 2.24 (Attractor)

A compact (i.e. a closed and bounded) set A ⊂ X is an attractor if there is an open set U containing A such that A is a limit set of all points in U.

Definition 2.25 (Basin of Attraction)

The set of all x having A as limit set is called the basin of attraction of A.

Remark 2.10

In particular, a singleton {x a} is an attractor if there exists δ > 0 such that for all x ∈ ]x a − δ, x a + δ[ the sequence \(\bigl (f^{\circ n}(x)\bigr )_{n\in \mathbb {N}}\) has a subsequence converging to x a.

Theorem 2.7

Let xₐ be a fixed point of f with |f′(xₐ)| < 1. Then xₐ is asymptotically stable and the set {xₐ} is an attractor.

More precisely, there exists δ > 0 such that for all x ∈ ]xₐ − δ, xₐ + δ[ the sequence \(\bigl (f^{\circ n}(x)\bigr )_{n\in \mathbb {N}}\) tends to xₐ.

Proof

Let \(K\in \mathbb {R}\) be such that |f′(xₐ)| < K < 1. As

$$\displaystyle \begin{aligned} \lim_{x \to x_a} \frac {\big|f(x) - x_a\big|}{|x-x_a|} = \lim_{x \to x_a} \Big| \frac {f(x) - f(x_a)}{x-x_a} \Big| = \big|f'(x_a)\big| < K \, , \end{aligned}$$

there exists δ > 0 such that for all x ∈ ]xₐ − δ, xₐ + δ[ with x ≠ xₐ one has

$$\displaystyle \begin{aligned} \Big| \frac {f(x) - f(x_a)}{x-x_a} \Big| < K \, . \end{aligned}$$

Hence

$$\displaystyle \begin{aligned} \big|f(x) - x_a\big| < K |x-x_a| < K \delta \, . \end{aligned} $$
(2.14)

As K < 1 we deduce that if \(x_0 \in \left ]x_a-\delta ,x_a+\delta \right [\), then \(x_1 = f(x_0) \in \left ]x_a-\delta ,x_a+\delta \right [\).

Applying (2.14) to x = x n, we have

$$\displaystyle \begin{aligned} |x_{n+1} - x_a\big| < K |x_n-x_a| \, , \end{aligned}$$

as x n+1 = f(x n). Thus it follows by induction that for all \(x_0\in \left ]x_a-\delta ,x_a+\delta \right [\) and for all \(n\in \mathbb {N}\) one has that

$$\displaystyle \begin{aligned} |x_n - x_a\big| < K^n |x_0-x_a| < K^n \delta \, . \end{aligned}$$

Hence, for all \(x_0 \in \left ]x_a-\delta ,x_a+\delta \right [\) the distance of \(f^{\circ n}(x_0)\) from x_a decreases at a geometric rate K < 1 and therefore tends to 0, as desired.
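The geometric convergence established in Theorem 2.7 is easy to observe numerically. A minimal sketch, using f(x) = cos x as a hypothetical example (its fixed point x_a ≈ 0.739 satisfies |f′(x_a)| = sin x_a ≈ 0.674 < 1):

```python
import math

def iterate(f, x0, n):
    """Return the orbit x0, f(x0), ..., f^n(x0)."""
    orbit = [x0]
    for _ in range(n):
        orbit.append(f(orbit[-1]))
    return orbit

f = math.cos                      # |f'(x_a)| = sin(x_a) ~ 0.674 < 1 at the fixed point
orbit = iterate(f, 1.0, 60)
x_a = orbit[-1]                   # after many steps the orbit has settled at x_a

# successive distances |x_n - x_a| shrink roughly by the factor |f'(x_a)|
errors = [abs(x - x_a) for x in orbit[:10]]
ratios = [errors[k + 1] / errors[k] for k in range(len(errors) - 1)]
print(x_a)                        # ~ 0.7390851
print(ratios[-1])                 # ~ sin(x_a) ~ 0.674
```

The observed ratio of successive errors approaches |f′(x_a)|, the best geometric rate allowed by the proof.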

Remark 2.11

If one has f′(x_a) = 0, then the preceding argument shows that the distance of \(f^{\circ n}(x)\) from x_a decreases at a geometric rate K for every \(K \in \left ]0,1\right [\).

The above remark justifies the following definition.

Definition 2.26 (Superattractor)

A fixed point x_a such that f′(x_a) = 0 is called a superattractor or superstable.

Remark 2.12

If |f′(x_a)| > 1, then, for a fixed \(K \in \mathbb {R}\) such that 1 < K < |f′(x_a)|, there exists δ > 0 such that for all x ∈ ]x_a − δ, x_a + δ[ one has |f(x) − x_a| > K|x − x_a|; hence the distance of \(f^{\circ n}(x)\) from x_a increases at a geometric rate K > 1, and therefore there exists \(n\in \mathbb {N}\) such that \(|f^{\circ n}(x) - x_a| > \delta\).

This motivates the following definition.

Definition 2.27 (Repeller)

A fixed point x_a with |f′(x_a)| > 1 is called unstable or a repeller.

Example 2.7

Let us consider a twice continuously differentiable function g and the Newton method of Eq. (2.13). In this case

$$\displaystyle \begin{aligned} f(x)=x-\frac{g(x)}{g'(x)}, \end{aligned} $$
(2.15)

and hence

$$\displaystyle \begin{aligned} f'(x)=1-\frac{g'(x)}{g'(x)} +\frac{g(x)g^{\prime\prime}(x)}{g'(x)^2} = \frac{g(x)g^{\prime\prime}(x)}{g'(x)^2}. \end{aligned} $$
(2.16)

If the point x_a is a non-degenerate zero of g (i.e. g(x_a) = 0 and g′(x_a) ≠ 0), then f(x_a) = x_a and, by (2.16), f′(x_a) = 0, so x_a is a superattractive fixed point.
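The quadratic convergence produced by a superattractive fixed point can be seen directly. A sketch, assuming the hypothetical choice g(x) = x² − 2, for which the fixed point of (2.15) is √2:

```python
import math

def newton_step(x):
    """One step of f(x) = x - g(x)/g'(x) for g(x) = x**2 - 2 (a hypothetical choice)."""
    g, dg = x * x - 2.0, 2.0 * x
    return x - g / dg

x, errors = 1.0, []
root = math.sqrt(2.0)
for _ in range(6):
    x = newton_step(x)
    errors.append(abs(x - root))

# superattractive fixed point: the error is roughly squared at each step
print(errors)
```

The error list shrinks far faster than any geometric rate: each entry is on the order of the square of the previous one, which is the hallmark of f′(x_a) = 0.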

Remark 2.13

As already mentioned above, a periodic point x_p of period n is a fixed point of \(f^{\circ n}\) (the n-fold composition of f).

Moreover, if x p is periodic, the points

$$\displaystyle \begin{aligned} x_p,\ f(x_p),\ f^{\circ 2} (x_p),\ \dotsc,\ f^{\circ (n-1)} (x_p) \end{aligned} $$
(2.17)

are also periodic, and by the chain rule, the derivative of \(f^{\circ n}\) at those points is the same and equal to

$$\displaystyle \begin{aligned} (f^{\circ n})' (x_p) = f'(x_p) \, f'(f(x_p)) \cdots f' (f^{\circ (n-1)}(x_p)). \end{aligned} $$
(2.18)

Definition 2.28 (Hyperbolic Point)

Let x_p be a periodic point of prime period n (see Remark 2.6). The point x_p is called hyperbolic if

$$\displaystyle \begin{aligned} \lvert ({f^{\circ n}})' (x_p) \rvert \neq 1. \end{aligned} $$
(2.19)

The number \((f^{\circ n})' (x_p)\) is called the multiplier of the hyperbolic point.

Definition 2.29 (Bifurcation Point)

A non-hyperbolic fixed point is called a bifurcation point.

Definition 2.30 (Attractive Periodic Orbit)

If x_p is an attractive (resp. repelling) fixed point of \(f^{\circ n}\), then so are all the other points of the orbit (2.17), and the orbit is called an attractive (resp. repelling) periodic orbit.

Definition 2.31 (Superattractive Periodic Orbit)

A periodic orbit is superattractive for \(f^{\circ n}\) if and only if f′ vanishes at least at one of the points x_p, f(x_p), f^{∘2}(x_p), …, f^{∘(n−1)}(x_p).

Example 2.8

As was mentioned above, an attractor as well as a repeller can be a fixed or a periodic point. For example, the function f(x) = −x^3 has two cyclic points −1 and +1 of period 2 and a fixed point x_0 = 0 (see Fig. 2.7a).

Fig. 2.7
figure 7

Convergence to the attractor. Panel (a) represents f(x) = −x^3, which is a mirror image of f(x) = x^3, and panel (b) corresponds to the graph of f^{∘2}(x) = x^9

It can easily be verified that x_0 is an attractor with basin (−1, +1) and that the cyclic orbit {−1, +1} is a repeller. To show this, it suffices to study the function f^{∘2}(x), for which −1 and +1 are repelling fixed points; since neither is fixed for f, they are cyclic repellers (see Fig. 2.7b).
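This behaviour is easy to reproduce numerically; the following sketch (orbit lengths chosen arbitrarily) iterates f(x) = −x³:

```python
def f(x):
    return -x ** 3

# inside the basin (-1, 1): the orbit tends to the fixed point 0
x = 0.9
for _ in range(50):
    x = f(x)
print(x)          # essentially 0

# the 2-cycle {-1, +1} is a repeller: a small perturbation is pushed away
y = 1.0 + 1e-6
for _ in range(8):
    y = f(f(y))   # follow f∘2, for which +1 is a repelling fixed point
print(y)          # far from +1 by now
```

The perturbed orbit leaves any small neighbourhood of +1 because the multiplier of the 2-cycle, (f^{∘2})′(±1) = 9, exceeds 1 in modulus.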

We end this section with the n-dimensional version of Theorem 2.7.

Theorem 2.8

Consider the discrete-time dynamical system

$$\displaystyle \begin{aligned} \boldsymbol{x}_{n+1} = \boldsymbol{f}(\boldsymbol{x}_n) \, , \end{aligned}$$

where \(\boldsymbol {f}\colon \mathbb {R}^n\to \mathbb {R}^n\) is a smooth map. Let \(\boldsymbol{x}_a\) be a fixed point of f, that is

$$\displaystyle \begin{aligned} \boldsymbol{f}(\boldsymbol{x}_a)=\boldsymbol{x}_a \, , \end{aligned}$$

and assume that the eigenvalues of the Jacobian matrix of f

$$\displaystyle \begin{aligned} D\boldsymbol{f} = \begin{pmatrix} \partial_{x_1} f_1 & \partial_{x_2} f_1 & \dotsc & \partial_{x_n} f_1 \\ \partial_{x_1} f_2 & \partial_{x_2} f_2 & \dotsc & \partial_{x_n} f_2 \\ \vdots & & & \vdots \\ \partial_{x_1} f_n & \partial_{x_2} f_n & \dotsc & \partial_{x_n} f_n \end{pmatrix} \end{aligned}$$

calculated at \(\boldsymbol{x}_a\) lie inside the open unit disc \(\bigl \{z\in \mathbb {C}\bigm ||z|<1\bigr \}\).

Then x a is an attractor.
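A numerical illustration of Theorem 2.8, for a planar map invented here for the purpose (the coefficients are assumptions, not from the text); for a 2×2 Jacobian the eigenvalue moduli can be read off the trace and determinant:

```python
import math

# a hypothetical smooth planar map with fixed point at the origin
def F(x, y):
    return 0.4 * x - 0.3 * y + 0.05 * x * y, 0.2 * x + 0.5 * y

# Jacobian at the origin (the quadratic term 0.05*x*y does not contribute there)
a, b, c, d = 0.4, -0.3, 0.2, 0.5
tr, det = a + d, a * d - b * c
disc = tr * tr - 4.0 * det                     # here disc < 0: complex conjugate pair
spectral_radius = math.sqrt(det) if disc < 0 else \
    max(abs((tr + math.sqrt(disc)) / 2), abs((tr - math.sqrt(disc)) / 2))
print(spectral_radius)                          # ~ 0.51 < 1, so the origin attracts

x, y = 0.8, -0.6
for _ in range(80):
    x, y = F(x, y)
print(x, y)                                     # both essentially 0
```

For a complex pair the common modulus is √det, since the product of the two conjugate eigenvalues equals the determinant.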

7 Existence of Periodic Behaviour

In this section we present some definitions that will be used in Sect. 3.1.

7.1 Schwarz Derivative

Definition 2.32 (Schwarz Derivative)

Let f be a one-dimensional map of a real variable, three times differentiable. The Schwarz derivative of f is defined by

$$\displaystyle \begin{aligned} f^S(x)& = \frac{d}{dx} \left(\frac{f''(x)}{f'(x)} \right) - \frac{1}{2} \left( \frac{f''(x)}{f'(x)} \right)^2 \, ,{} \end{aligned} $$
(2.20)

or, equivalently, by

$$\displaystyle \begin{aligned} f^S(x)& = \frac{f'''(x)}{f'(x)} - \frac{3}{2} \left( \frac{f''(x)}{f'(x)} \right)^2 \, . {} \end{aligned} $$
(2.21)

The relevant property of the Schwarz derivative is that its sign is preserved under composition, in the sense that if f^S(x) < 0, then also \((f^{\circ n})^S (x)<0 \quad \forall n \in \mathbb {N}\). To prove this statement we first prove a “chain rule” formula for the Schwarz derivative.

Lemma 2.1

Let f, g be three times differentiable; then

$$\displaystyle \begin{aligned} (f \circ g)^S(x) = f^S \bigl(g(x)\bigr) \bigl(g'(x)\bigr)^2 + g^S (x) \, . \end{aligned} $$
(2.22)

Proof

According to the (ordinary) chain rule, we have

$$\displaystyle \begin{aligned} (f \circ g)'(x) & = f'\bigl(g(x)\bigr) g'(x) \\ (f \circ g)''(x) & = f''\bigl(g(x)\bigr) \bigl(g'(x)\bigr)^2 + f'\bigl(g(x)\bigr) g''(x) \\ (f \circ g)'''(x) & = f'''\bigl(g(x)\bigr) \bigl(g'(x)\bigr)^3 + 3 f''\bigl(g(x)\bigr) g'(x) g''(x) + f'\bigl(g(x)\bigr) g'''(x) \, . \end{aligned} $$

Hence,

$$\displaystyle \begin{aligned} (f \circ g)^S(x) & = \frac{f'''\bigl(g(x)\bigr) \bigl(g'(x)\bigr)^3 + 3 f''\bigl(g(x)\bigr) g'(x) g''(x) + f'\bigl(g(x)\bigr) g'''(x)} {f'\bigl(g(x)\bigr) g'(x)} \\ & \quad - \frac{3}{2} \left( \frac{f''\bigl(g(x)\bigr) \bigl(g'(x)\bigr)^2 + f'\bigl(g(x)\bigr) g''(x)} {f'\bigl(g(x)\bigr) g'(x)} \right)^2 \\ & = \left[\frac{f'''\bigl(g(x)\bigr)}{f'\bigl(g(x)\bigr)} - \frac{3}{2} \left( \frac{f''\bigl(g(x)\bigr)}{f'\bigl(g(x)\bigr)}\right)^2 \right] \bigl(g'(x)\bigr)^2 \\ & \quad + \frac{g'''(x)}{g'(x)} - \frac{3}{2} \left(\frac{g''(x)}{g'(x)} \right)^2 \\ & = f^S \bigl(g(x)\bigr) \bigl(g'(x)\bigr)^2 + g^S(x) \, . \end{aligned} $$

By (2.22), if f^S < 0 and g^S < 0, then (f ∘ g)^S < 0. In particular, if f^S is negative, then (f^{∘n})^S is also negative for all n > 1. This yields the following theorem (for an illustration see Ref. [5]).
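Lemma 2.1 can be checked numerically. The sketch below uses the hypothetical pair f(x) = x² + 1 and g(x) = x³, whose derivatives (and those of f ∘ g = x⁶ + 1) are known exactly:

```python
def schwarzian(d1, d2, d3, x):
    """f^S(x) from the first three derivatives of f, as in Eq. (2.21)."""
    return d3(x) / d1(x) - 1.5 * (d2(x) / d1(x)) ** 2

# f(x) = x**2 + 1 and g(x) = x**3, with exact derivatives (a hypothetical test pair)
g = lambda x: x ** 3
f1, f2, f3 = (lambda x: 2.0 * x), (lambda x: 2.0), (lambda x: 0.0)
g1, g2, g3 = (lambda x: 3.0 * x * x), (lambda x: 6.0 * x), (lambda x: 6.0)

# (f∘g)(x) = x**6 + 1, whose derivatives we also know exactly
h1, h2, h3 = (lambda x: 6.0 * x ** 5), (lambda x: 30.0 * x ** 4), (lambda x: 120.0 * x ** 3)

x = 2.0
lhs = schwarzian(h1, h2, h3, x)
rhs = schwarzian(f1, f2, f3, g(x)) * g1(x) ** 2 + schwarzian(g1, g2, g3, x)
print(lhs, rhs)   # both sides of (2.22) agree, and both are negative
```

Both sides evaluate to −4.375 here, and the negativity of f^S and g^S propagates to the composition, as the text claims.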

Theorem 2.9 (Schwarz Theorem)

If f S < 0 and if f has n critical points, then f has at most n + 2 attracting periodic orbits.

From (2.21) we see that if Q is a polynomial of degree at most 2, then Q S < 0. For higher degree polynomials, we have the following proposition.

Proposition 2.1

Let Q(x) be a polynomial whose first derivative Q′(x) has real roots. Then Q S(x) < 0.

Proof

Up to a nonzero constant factor, which does not affect (2.20), we may suppose that

$$\displaystyle \begin{aligned} Q'(x) = \prod_{i=1}^{n} (x-a_i) \qquad \text{with }a_i\text{ real} \, . \end{aligned} $$
(2.23)

Then

$$\displaystyle \begin{aligned} Q''(x) = \sum_{j=1}^{n} \frac{\prod_{i=1}^{n} (x-a_i)}{x-a_j} = \sum_{j=1}^{n} \frac{Q'(x)}{x-a_j}.\end{aligned} $$
(2.24)

Therefore, by (2.20),

$$\displaystyle \begin{aligned} Q^S(x) & = \frac{d}{dx} \left( \sum_{j=1}^{n} \frac{1}{x-a_j} \right) - \frac{1}{2} \left( \sum_{j=1}^{n} \frac{1}{x-a_j} \right)^2 \\ & = - \sum_{j=1}^{n} \frac{1}{(x-a_j)^2} - \frac{1}{2} \left( \sum_{j=1}^{n} \frac{1}{x-a_j} \right)^2 < 0 \, . \end{aligned} $$

7.2 Singer’s Theorem

As mentioned before, the Schwarz derivative preserves its sign under composition; this fact is useful in the following theorem.

Theorem 2.10 (Singer [7])

Let f be a map from a closed interval I ⊆ [0, b] onto itself; then the dynamical system x_{n+1} = f(x_n) has at most one attracting periodic orbit in the interval I if the following conditions are met:

  1. 1.

    f is a \(\mathcal {C}^3\) function;

  2. 2.

    There exists a critical point x c ∈ I such that

    $$\displaystyle \begin{aligned} \begin{cases} f'(x) > 0 \, , & \mathit{\text{for }}x<x_c , \\ f'(x) = 0 \, , & \mathit{\text{for }}x=x_c , \\ f'(x) < 0 \, , & \mathit{\text{for }}x>x_c . \end{cases} \end{aligned}$$
  3. 3.

    The origin is a repeller for f, that is

    $$\displaystyle \begin{aligned} f(0) = 0 , \qquad \mathit{\text{and}} \qquad \big|f'(0)\big|>1; \end{aligned}$$
  4. 4.

    The Schwarz derivative is

    $$\displaystyle \begin{aligned} f^S(x) \le 0 \qquad \mathit{\text{for all }}x \in I \setminus \{x_c\}. \end{aligned}$$
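As an illustration (not part of the theorem), the logistic map f(x) = a x(1 − x) with the hypothetical choice a = 3.2 meets conditions 1–4 on [0, 1]: the critical point is x_c = 1/2, f(0) = 0 with f′(0) = a > 1, and f^S(x) = −6/(1 − 2x)² < 0. Iteration then finds the unique attracting periodic orbit, a 2-cycle:

```python
def f(x, a=3.2):
    """Logistic map; for a = 3.2 (a hypothetical choice) Singer's hypotheses hold on [0, 1]."""
    return a * x * (1.0 - x)

# Schwarz derivative of this degree-2 polynomial: f^S(x) = -6/(1 - 2x)**2 < 0
def fS(x):
    return -6.0 / (1.0 - 2.0 * x) ** 2

x = 0.3
for _ in range(2000):      # discard the transient
    x = f(x)

# the orbit settles on the unique attracting periodic orbit: a 2-cycle here
print(sorted([x, f(x)]))   # ~ [0.513, 0.799]
print(fS(0.25))            # negative, as required by condition 4
```

The two printed points are exchanged by f and fixed by f∘f, which is exactly what an attracting 2-cycle means.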

7.3 Sharkovsky’s Theorem

Let us introduce the following ordering on natural numbers.

Definition 2.33 (Sharkovsky Ordering [10])

$$\displaystyle \begin{aligned} 3 \succ 5 \succ 7 \succ \dotsb \succ 2\cdot 3 \succ 2\cdot 5 \succ 2\cdot 7 \succ \dotsb \succ 2^2\cdot 3 \succ 2^2\cdot 5 \succ \dotsb \succ 2^3\cdot 3 \succ 2^3\cdot 5 \succ \dotsb \succ \dotsb \succ 2^3 \succ 2^2 \succ 2 \succ 1 \end{aligned} $$
(2.25)

That is, first all odd integers except 1 are listed; they are followed by 2 times the odd numbers, then 2² times the odd numbers, 2³ times the odd numbers, etc. This exhausts all the natural numbers with the exception of the powers of two, which are listed last, in decreasing order.

Theorem 2.11 (Sharkovsky)

Let I be an interval of \(\mathbb {R}\) and let f : I → I be a continuous function with a periodic point of prime period k. If k ≻ l in the Sharkovsky ordering of Def. 2.33, then f also has a periodic point of period l.

For a proof see Devaney [5, p. 63] or [6].

We limit ourselves to showing the last part of the theorem, which is as follows.

Proposition 2.2

Let f be a continuous function with a periodic point of prime period \(2^n\), for some n ≥ 1. Then f also has a periodic point of period \(2^{n-1}\).

Proof

We consider at first the case n = 1; thus, we have to prove that if f has a 2-periodic point, then f has a fixed point.

Let a be a 2-periodic point of f, and consider b = f(a). If a = b, then a is a fixed point of f and we have finished. If ab, define

$$\displaystyle \begin{aligned} g(x) = f(x)-x \, . \end{aligned}$$

We have

$$\displaystyle \begin{aligned} g(a) & = f(a)-a = b-a \, , \\ g(b) & = f(b)-b = f\bigl(f(a)\bigr)-b = a-b \, . \end{aligned} $$

As g(a) and g(b) have opposite signs, then there exists at least one value c between a and b for which g(c) = 0, that is c is a fixed point of f.

Now we consider the case of a generic n. Let \(\varphi (x) = f^{\circ 2^{n-1}}(x)\); as \(\varphi ^{\circ 2}(x) = f^{\circ 2^n}(x)\), the \(2^n\)-periodic point of f is a 2-periodic point of φ. Hence φ has a fixed point, that is, f has a \(2^{n-1}\)-periodic point.

Since the largest number in the Sharkovsky ordering is 3, we obtain the following result.

Corollary 2.1 (Period Three Implies All Periods)

If f has a periodic orbit of period three, then it has periodic orbits of all periods.
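Numerically one can exhibit a 3-cycle, e.g. for the logistic map with the hypothetical parameter a = 3.835 (inside its "period-3 window"); Corollary 2.1 then guarantees orbits of all periods, although only the attracting 3-cycle is visible under iteration:

```python
def f(x, a=3.835):
    # logistic map; at this (assumed) parameter an attracting 3-cycle exists
    return a * x * (1.0 - x)

x = 0.3
for _ in range(5000):              # discard the transient
    x = f(x)

cycle = [x, f(x), f(f(x))]
print(cycle)                       # three distinct points visited cyclically
print(abs(f(f(f(x))) - x))         # essentially 0: f∘3 fixes each of them
```

The unstable orbits of the other periods promised by the corollary coexist with this cycle but repel nearby points, so plain iteration never displays them.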

As the set of the smallest numbers in the Sharkovsky ordering is the set of the powers of 2, the following corollary holds:

Corollary 2.2

If f has a periodic point of prime period k, with k not a power of two, then f has infinitely many periodic points. Conversely, if f has only finitely many periodic points, then they all necessarily have periods that are powers of two.

For multidimensional maps or for discontinuous maps, Sharkovsky’s Theorem is no longer valid, as shown by the following two examples.

Example 2.9

Consider the 2-dimensional map

$$\displaystyle \begin{aligned} \begin{cases} x_{n+1} = -\dfrac{1}{2}\,x_n -\dfrac{\sqrt{3\,}}{2}\,y_n \, , \\*[3mm] y_{n+1} = \dfrac{\sqrt{3\,}}{2}\,x_n-\dfrac{1}{2}\,y_n \, , \end{cases} \end{aligned}$$

which corresponds to a rotation of 120° about the origin.

Clearly the origin is a fixed point, whereas any other point has period 3: there are no orbits with period different from 3.
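A quick check of the rotation example (the sample point is chosen arbitrarily):

```python
import math

def R(x, y):
    # rotation by 120 degrees about the origin
    c, s = -0.5, math.sqrt(3.0) / 2.0
    return c * x - s * y, s * x + c * y

p = (1.0, 0.25)      # an arbitrary point other than the origin
q = p
for _ in range(3):
    q = R(*q)
print(q)             # back to the start, up to round-off: period 3
```

One application moves the point, three applications return it, so every non-zero point has period exactly 3.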

Example 2.10

Consider the function \(f \colon \left [0,1\right [ \to \left [0,1\right [\)

$$\displaystyle \begin{aligned} f(x) = \begin{cases} x+\dfrac{1}{3} & \text{if }x\in\left[0,\dfrac{2}{3}\right[ \, , \\*[3mm] x-\dfrac{2}{3} & \text{if }x\in\left[\dfrac{2}{3},1\right[ \, . \end{cases} \end{aligned}$$

It is easy to check that f^{∘3}(x) = x for all x; hence, since f has no fixed points, every point has period 3 and there are no orbits with period different from 3.
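A quick numerical check of f^{∘3}(x) = x at a few sample points:

```python
def f(x):
    # the piecewise map of Example 2.10 on [0, 1)
    return x + 1.0 / 3.0 if x < 2.0 / 3.0 else x - 2.0 / 3.0

results = []
for x in [0.0, 0.1, 0.45, 0.7, 0.99]:
    results.append((x, f(f(f(x)))))
print(results)       # each pair (x, f∘3(x)) agrees up to round-off
```

Since f itself has no fixed points, the only possible period dividing 3 is 3 itself, confirming the claim.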