Definition of the Subject

The theory of dynamical systems studies the behavior of solutions of systems, such as nonlinear ordinary differential equations (ODEs), depending upon parameters. Using qualitative methods of bifurcation theory, the behavior of the system is characterized for various parameter combinations. In particular, the catalog of qualitatively different system behaviors can be identified, together with the regions in parameter space where the different behaviors occur; bifurcations delimit such regions. Symbolic and analytical approaches are in general infeasible, but numerical bifurcation analysis is a powerful tool that aids in the understanding of a nonlinear system. When computing power became widely available, algorithms for this type of analysis matured and the first codes were developed. With the development of suitable algorithms, the advances in the qualitative theory have found their way into several software projects that have evolved over time. The availability of software packages allows scientists to study and adjust their models and to draw conclusions about their dynamics.

Introduction

Nonlinear ordinary differential equations depending upon parameters are ubiquitous in science. In this article methods for numerical bifurcation analysis are reviewed, an approach to investigate the dynamic behavior of nonlinear dynamical systems given by

$$ \dot{x}=f(x,p)\:,\quad x\in\mathbb{R}^{n}\:,\enskip p \in\mathbb{R}^{n_{p}}\:, $$
(1)

where \({f\colon\mathbb{R}^{n}\times\mathbb{R}^{n_{p}}\rightarrow\mathbb{R}^{n}}\) is generic and sufficiently smooth. In particular, x(t) represents the state of the system at time t and its components are called state (or phase) variables, \({\dot{x}(t)}\) denotes the time derivative of x(t), while p denotes the parameters of the system, representing experimental control settings or variable inputs.

In many instances, solutions of (1), starting at an initial condition \({x(0)}\), appear to converge as \({t\to\infty}\) to equilibria (steady states) or limit cycles (periodic orbits). Bounded solutions can also converge to more complex attractors, like tori (quasi‐periodic orbits) or strange attractors (chaotic orbits). The attractors of the system are invariant under time evolution, i. e., under the application of the time-t map \({\Phi^{t}}\), where Φ denotes the flow induced by the system (1). Solutions attracting all nearby initial conditions are said to be stable, while solutions repelling some nearby initial conditions are unstable.

Generally speaking, it is hard to obtain closed formulas for \({\Phi^{t}}\) as the system is nonlinear. In some cases, one can compute equilibria analytically, but this is often not the case for limit cycles. However, numerical simulations of (1) easily give an idea of how solutions look, although one never computes a true orbit due to numerical errors. One can verify stability conditions by linearizing the flow around equilibria and cycles. In particular, an equilibrium \({x_0}\) is stable if the eigenvalues of the linearization (Jacobi) matrix \({A=f_x(x_0,p)}\) (where the subscript denotes differentiation) all have a negative real part. Similarly, for a limit cycle \({x_0(t)}\) with period T, one defines the Floquet multipliers (or simply multipliers) as the eigenvalues of the monodromy matrix \({M=\Phi_x^{T}(x_0(0))}\). The cycle is stable if all nontrivial multipliers (there is always a trivial multiplier equal to 1) lie within the unit circle. Equilibria and limit cycles are called hyperbolic if the eigenvalues and nontrivial multipliers do not have zero real part or modulus one, respectively.
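
As a concrete illustration of the linearization test just described, the following minimal Python sketch builds a finite-difference Jacobi matrix and checks whether all its eigenvalues have negative real part; the example vector field, step size, and tolerance are illustrative assumptions, not part of the original text.

```python
import numpy as np

def jacobian(f, x, p, eps=1e-7):
    """Finite-difference approximation of the Jacobi matrix A = f_x(x, p)."""
    n = len(x)
    A = np.zeros((n, n))
    f0 = f(x, p)
    for i in range(n):
        dx = np.zeros(n)
        dx[i] = eps
        A[:, i] = (f(x + dx, p) - f0) / eps
    return A

def is_stable_equilibrium(f, x0, p):
    """True if all eigenvalues of the linearization have negative real part."""
    return bool(np.all(np.linalg.eigvals(jacobian(f, x0, p)).real < 0))

# Hypothetical example: a damped oscillator x1' = x2, x2' = -x1 - p*x2
f = lambda x, p: np.array([x[1], -x[0] - p * x[1]])
print(is_stable_equilibrium(f, np.array([0.0, 0.0]), 0.5))   # True for p > 0
```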

For any given parameter combination p, the state space representation of all orbits constitutes the phase portrait of the system. In practice, one draws a set of strategic orbits (or finite segments of them), from which all other orbits can be intuitively inferred, as illustrated in Fig. 1.

Figure 1  Phase portrait of a two‐dimensional system with two attractors (the equilibrium \({X_2}\) and the limit cycle Γ), a repellor (\({X_0}\)), and a saddle (\({X_1}\))

Points \({X_{0},X_{1},X_{2}}\) are equilibria, of which \({X_0}\) and \({X_1}\) are unstable and \({X_2}\) is stable. In particular, \({X_0}\) is a repellor, i. e., nearby orbits do not tend to remain close to \({X_0}\), while \({X_1}\) is a saddle, i. e., almost all nearby orbits go away from \({X_1}\) except two, which tend to \({X_1}\) and lie on the so‐called stable manifold; the two orbits emanating from \({X_1}\) compose the unstable manifold. There are therefore two attractors, the equilibrium \({X_2}\) and the limit cycle Γ, whose basins of attraction consist of the initial conditions in the shaded and white areas, respectively. Note that while attractors and repellors can be easily obtained through simulation, forward and backward in time, saddles can be hard to find.

The analysis of system (1) becomes even more difficult if one wants to follow the phase portrait under variation of parameters. Generically, by perturbing a parameter slightly the phase portrait changes slightly as well. That is, if the new phase portrait is topologically equivalent to the original one, then nothing has changed from a qualitative point of view, i. e., all attracting, repelling, and saddle sets are still present with unchanged stability properties, though slightly perturbed. By contrast, the critical points in parameter space where arbitrarily small parameter perturbations give rise to nonequivalent phase portraits are called bifurcation points, where bifurcations are said to occur. Bifurcations therefore result in a partition of parameter space into regions: parameter combinations in the same region correspond to topologically equivalent dynamics, while nonequivalent phase portraits arise for parameter combinations in neighboring regions. Most often, this partition is represented by means of a two‐dimensional bifurcation diagram, where the regions of a parameter plane are separated by so‐called bifurcation curves. Bifurcations are said to be local if they occur in an arbitrarily small neighborhood of the equilibrium or cycle; otherwise, they are said to be global.

Although one might hope to detect bifurcations by simulating system (1) for various parameter combinations and initial conditions, a “brute force” simulation approach is hardly effective and accurate in practice, because bifurcations of equilibria and cycles are associated with a loss of hyperbolicity, e. g., stability, so that one should dramatically increase the length of simulations while approaching the bifurcation. In particular, saddle sets are hard to find by simulation, but play a fundamental role in bifurcation analysis, since they, together with attracting and repelling sets, form the skeleton of the phase portrait. This is why numerical bifurcation analysis does not rely on simulation, but rather on continuation, a numerical method suited for computing (approximating through a discrete sequence of points) one‐dimensional manifolds (curves, or “branches”) implicitly defined as the zero set of a suitable defining function.

The general idea is to formulate the computation of equilibria and their bifurcations as a suitable algebraic problem (AP) of the form

$$ F(u,p)=0\:, $$
(2)

where \({u\in\mathbb{R}^{n_u}}\) is composed of x and possibly other variables characterizing the system, see, e. g., defining functions as in Sect. “Continuation and Detection of Bifurcations”. Here, however, for simplicity of notation, u will be considered as in \({\mathbb{R}^n}\), but the actual dimension of u will always be clear from the context. Similarly, limit cycles and their bifurcations are formulated in the form of a boundary‐value problem (BVP)

$$ \left\{\arraycolsep0.17em\begin{array}{rl} \dot{u}-f(u,p) &=0\:,\\ g(u(0),u(T),p) &=0\:,\\ \int_{0}^{T} h(u(t),p)\mathrm{d}t &=0\:, \end{array}\right. $$
(3)

with \({n_b}\) boundary conditions, i. e., \( g\colon\mathbb{R}^{n}\times\mathbb{R}^{n}\times{\mathbb R}^{n_p}\rightarrow\mathbb{R}^{n_{b}}\), \({n_i}\) integral conditions, i. e., \({h\colon{\mathbb R}^{n}\times\mathbb{R}^{n_p}\rightarrow\mathbb{R}^{n_{i}}}\), and u in a proper function space. In other words, a list of defining functions is formulated, in the form (2) or (3), to cover all cases of interest. For example, \({u=x}\) and \({F(x,p)=f(x,p)}\) is the AP defining equilibria of (1). The commonly used cycle BVP, with the time‐rescaling \({t=T\tau}\), is

$$ \left\{\arraycolsep0.17em\begin{array}{rl} \dot{x}-Tf(x,p) & = 0\:,\\ x(0)-x(1) & = 0\:,\\ \int_0^1x(\tau)^{\top}\dot{x}^{k-1}(\tau)\mathrm{d}\tau & = 0\:, \end{array}\right. $$
(4)

where from here on \({^{\top}}\) denotes the transpose and, for simplicity, the dot denotes \({\mathrm{d}/\mathrm{d}\tau}\). The integral condition is the so‐called phase condition, which ensures that \({x(\tau)}\) is the 1‑periodic solution of (1) closest to the reference solution \({x^{k-1}(\tau)}\) (typically known from the previous point along the continuation), among time‐shifted orbits \({x(\tau-\tau_0), \tau_0\in[0,\,1]}\).

As will be discussed in Sect. “Discretization of BVPs”, a proper time‐discretization of \({u(\tau)}\) allows one to approximate any BVP by a suitable AP. Thus, equilibria, limit cycles and their bifurcations can all be represented by an algebraic defining function like (2), and numerical continuation allows one to produce one‐dimensional solution branches of (2) under the variation of strategic components of p, called free parameters. With this approach, equilibria and cycles can be followed without further difficulty in parameter regimes where these are unstable. Then, during continuation, the stability of equilibria and cycles is determined through linearization. Moreover, the characterization of nearby solutions of (1) can be done using normal forms, i. e., the simplest canonical models to which the system, close to a bifurcation, can be reduced on a lower‐dimensional manifold of the state space, the so‐called center manifold. While a branch of equilibria or cycles is followed, bifurcations can be detected as the zero of suitable test functions. Upon detection of a bifurcation, the defining function can be augmented by this test function or another appropriate function, and the new defining function can then be continued using one more free parameter.

An analytical bifurcation study is feasible for simple systems only. Numerical bifurcation analysis is one of the few but also very powerful tools to understand and describe the dynamics of systems depending on parameters. Some basic steps while performing bifurcation analysis will be outlined and software implementations of continuation and bifurcation algorithms discussed.

First, a few standard and often used approaches for the computation and continuation of zeros of a defining function are reviewed in Sect. “Continuation and Discretization of Solutions”. The presentation starts with the most obvious, but also naive, approaches to contrast these with the methods employed by software packages. In Sect. “Normal Forms and the Center Manifold” several possible scenarios for the loss of stability of equilibria and limit cycles are discussed. Not all bifurcations are characterized by linearization, and for the detection and analysis of these bifurcations, codimension 1 normal forms are mentioned and a general method for their computation on a center manifold is presented. Then, a list of suitable test functions and defining systems for the computation of bifurcation branches is discussed in Sect. “Continuation and Detection of Bifurcations”. In particular, when a system bifurcates, new solution branches may appear. Techniques to switch to such new branches are described in Sect. “Branch Switching”. Finally, the computation and continuation of global bifurcations characterized by orbits connecting equilibria, in particular homoclinic orbits, is presented in Sect. “Connecting Orbits”. This review concludes with an overview of existing implementations of the described algorithms in Sect. “Software Environments”. Previous reviews [7,9,22,43] have similar contents. This review, however, focuses more on the principles now underlying the most frequently used software packages for bifurcation analysis and the algorithms being used.

Continuation and Discretization of Solutions

The continuation of a solution u of (2) with respect to one parameter p is a fundamental application of the Implicit Function Theorem (IFT).

Generally speaking, to define one‐dimensional solution manifolds (branches), the number of unknowns in (2) should be one more than the number of equations, i. e., \({n_p=1}\). However, during continuation it is better not to distinguish between state variables and parameters as will become apparent in Sects. “Pseudo-Arclength Continuation” and “Moore–Penrose Continuation”. Therefore, write \({y=(u,p)\in Y=\mathbb{R}^{n+1}}\) for the continuation variables in the continuation space Y and consider the continuation problem

$$ F(y)=0\:, $$
(5)

with \({F\colon\mathbb{R}^{n+1}\rightarrow\mathbb{R}^{n}}\).

Let F be at least continuously differentiable, \({y_0=(u_0,p_0)}\) be a known solution point of (5), and the matrix \({F_y(y_0)=[F_u(u_0,p_0)|F_p(u_0,p_0)]}\) be of full rank, i. e.,

$$ \operatorname{rank}(F_y(y_0))=n \;\Leftrightarrow\; \left\{\arraycolsep0.17em\begin{array}{ll} (\text{i}) & \operatorname{rank}(F_{u}(u_0,p_0))=n\:,\ \text{or}\\ (\text{ii}) & \operatorname{rank}(F_{u}(u_0,p_0))=n-1\ \text{and}\ F_{p}(u_0,p_0)\notin \mathcal{R}(F_{u}(u_0,p_0))\:, \end{array}\right. $$
(6)

where \({\mathcal{R}(F_{u})}\) denotes the range of \({F_u}\). Then the IFT states that there exists a unique solution branch of (5) locally to \({y_0}\). Introducing a scalar coordinate s parametrizing the branch, e. g., the arclength positively measured from \({y_0}\) in one of the two directions along the solution branch, one can represent the branch by \({y(s)=(u(s),p(s))}\) and the IFT guarantees that \({F(y(s))=0}\) for \({|s|}\) sufficiently small. Moreover, y(s) is continuously differentiable and the vector \({\phi(s)=y_s(s)=(u_s(s),p_s(s))=(v(s),q(s))}\), tangent to the solution branch at y(s), exists and is the unique solution of

$$ \left\{\arraycolsep0.17em\begin{array}{rl} F_y(y(s))\phi(s)=F_u(u(s),p(s))v(s)+F_p(u(s),p(s))q(s) & = 0\:,\\ \phi(s)^{\top}\phi(s)=v(s)^{\top} v(s)+q(s)^{2} & = 1\:. \end{array}\right. $$
(7)

In other words, the matrix \({F_y(y_0)}\) has a one‐dimensional nullspace \({\mathcal{N}(F_y(y_0))}\) spanned by \({\phi(0)}\), and \({y_0}\) is said to be a regular point of the continuation space Y.

Below, several variants of numerical continuation will be described. The aim is to produce a sequence of points \({y_k, k\geq 0}\) that approximate the solution branch y(s) in one direction. Starting from \({y_0}\), the general idea is to make a suitable prediction \({y_1^0}\), typically along the tangent vector, to which Newton's method is applied to find the new point \({y_1}\). The predictor‐corrector procedure is then iterated. First, the simplest implementation is presented and it is shown where it might fail. Many continuation packages for bifurcation theory use an alternative implementation, of which two variants are discussed. Many more advanced predictor‐corrector schemes have been designed, see [1,18,46] and references therein.

Parameter Continuation

Parameter continuation assumes that the solution branch of (5) can be parameterized by the parameter \({p\in\mathbb{R}}\). Indeed, if \({F_u}\) has full rank, i. e., case (i) in (6), then this is possible by the IFT. Starting from \({(u_0,p_0)}\) and perturbing the parameter a little, with a stepsize h, the new parameter is \({p_1=p_0+h}\) and the simplest predictor for the state variable is given by \({u_1^0=u_0}\).

Application of Newton's method to find \({u_1}\) satisfying (5) leads to

$$ u_1^{j+1}=u_1^j-F_u(u_1^j,p_1)^{-1}F(u_1^j,p_1)\:,\quad j=0,1,2,\dotsc $$

The iterations are stopped when a certain accuracy is achieved, i. e., \({\|\Delta u\|=\|u_1^{j+1}-u_1^j\|<\varepsilon_u}\) and/or \( \|F(u_1^j,p_1)\|<\varepsilon_F \). In practice, the number of Newton steps is also bounded, in order to guarantee termination. If this maximum is reached before convergence, the computation is restarted with a smaller (typically halved) stepsize. If, on the other hand, convergence is quick, e. g., after only a few iterations, the stepsize is increased by a constant factor (1.3 is typical). In any case, the stepsize is kept between two assigned limits \({h_{\mathrm{min}}}\) and \({h_{\mathrm{max}}}\), so that continuation cannot proceed when convergence is not reached even with the minimum stepsize. If h is chosen too small, unnecessary computational work is performed, while if h is chosen too large, little detail of the solution branch is obtained.
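
The loop just described can be summarized in a short sketch. The following Python code is a minimal illustration (not taken from any particular package): it uses the trivial predictor \({u_1^0=u_0}\), corrects with Newton's method, and halves or enlarges the stepsize depending on convergence; all tolerances, factors, and the example problem are assumptions chosen for demonstration.

```python
import numpy as np

def newton(F, Fu, u0, p, tol=1e-10, max_iter=8):
    """Newton corrector for F(u, p) = 0 at fixed parameter p."""
    u = u0.copy()
    for j in range(max_iter):
        du = np.linalg.solve(Fu(u, p), -F(u, p))
        u = u + du
        if np.linalg.norm(du) < tol:
            return u, j + 1
    return None, max_iter                            # no convergence

def parameter_continuation(F, Fu, u0, p0, h=0.1,
                           h_min=1e-4, h_max=0.5, n_points=50):
    branch = [(u0.copy(), p0)]
    u, p = u0.copy(), p0
    while len(branch) < n_points:
        p_new = p + h
        u_new, iters = newton(F, Fu, u, p_new)       # predictor: u_1^0 = u_0
        if u_new is None:
            if h / 2 < h_min:
                break                                # give up: minimum stepsize reached
            h /= 2                                   # retry with a smaller step
            continue
        u, p = u_new, p_new
        branch.append((u.copy(), p))
        if iters <= 3:
            h = min(1.3 * h, h_max)                  # quick convergence: enlarge step
    return branch

# Hypothetical scalar problem: F(u, p) = u^3 + u - p (no folds, so this always works)
F  = lambda u, p: np.array([u[0]**3 + u[0] - p])
Fu = lambda u, p: np.array([[3 * u[0]**2 + 1]])
print(len(parameter_continuation(F, Fu, np.array([0.0]), 0.0)))
```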

As a first improved predictor, note that the IFT suggests using the tangent prediction for the state variables, \({u_1^0=u_0+hv_0}\), where \({v_0}\) is obtained from (7) with \({s=0}\), rescaled as \({v(0)/q(0)}\). Alternatively, the tangent vector can be approximated by the difference \({v_k=(u_k-u_{k-1})/h_k}\) or, even better, computed at negligible cost, since the numerical decomposition of the matrix \({F_u(u_k,p_k)}\) is known from the last Newton iteration.

These methods are illustrated in Fig. 2. Note in particular the folds in the sketch. Here parameter continuation does not work: exactly at the fold \({F_{u}(u,p)}\) is singular, so that Newton's method does not converge, and beyond the fold, for larger p, there is no local solution of (5).

Figure 2  Parameter continuation without (a) and with (b) tangent prediction. The dotted lines indicate subspaces where solutions are searched

Pseudo-Arclength Continuation

Near folds, the solution branch is not well parameterized by the parameter, but one can use a state variable for the parametrization. In fact, the fold is a regular point (case (ii) in (6)) at which the tangent vector \({\phi=(v,q)}\) has no parameter component, i. e., \({q=0}\). So, without distinguishing between parameters and state variables, one takes the tangent prediction \({y_1^0=y_0+h\phi_0}\), as long as the starting solution \({y_0}\) is a regular point. Since now both p and u are corrected, one more constraint is needed. Pseudo‐arclength continuation uses the stepsize h as an approximation of the required distance, in arclength, between \({y_0}\) and the next point \({y_1}\). This leads to the so‐called pseudo‐arclength equation \({\phi_{0}^{\top}(y_{1}-y_{0})=h}\). In this way, solution branches can be followed past folds. The idea for this continuation method is due to Keller [50].

The Newton iteration, applied to

$$ \left\{\arraycolsep0.17em\begin{array}{rl} F(y_1)& = 0\:,\\ \phi_0^{\top}(y_1-y_0)-h & = 0\:, \end{array}\right. $$

is given by

$$ y_1^{j+1}=y_1^{j}-\left( \begin{matrix} F_y(y_1^{j})\\ \phi_0^{\top} \end{matrix}\right)^{-1} \left( \begin{matrix} F(y_1^{j})\\ 0 \end{matrix} \right)\:,\quad j=0,1,2,\dotsc\:, $$
(8)

where \({\Delta y=y_1^{j+1}-y_1^{j}}\) is forced to lie in the hyperplane orthogonal to the tangent vector, as illustrated in Fig. 3a. Upon convergence, the new tangent vector \({\phi_1}\) is obtained by solving (7) at \({y_1}\).

Figure 3  Pseudo‐arclength (a) and Moore–Penrose (b) continuation. Searching a solution in hyperplanes without (a) and with (b) updating the tangent vector. The open dots correspond to Newton iterations, full dots to points on the curve. The dotted lines indicate subspaces where solutions are searched

Moore–Penrose Continuation

This continuation method is based on optimization. Starting with the tangent prediction \({y_{1}^0=y_0+h\phi_0}\), a point \({y_1}\) with \({F(y_{1})=0}\) nearest to \({y_{1}^{0}}\) is sought, i. e., one solves

$$ \min_{y_{1}}\{ \| y_{1}-y^{0}_{1}\| | F(y_{1})=0\}\:. $$

Each correction is therefore required to be orthogonal to the nullspace of \({F_y(y_1^{j})}\), i. e.,

$$ \left\{\arraycolsep0.17em\begin{array}{rl} F(y_1^{j+1})& = 0\:,\\ (\phi_1^{j})^{\top}(y_1^{j+1}-y^{j}_1)& = 0\:. \end{array}\right. $$

Starting with \({\phi_{1}^{0}=\phi_0}\) the Newton iterations are given by

$$ \left\{\arraycolsep0.17em\begin{array}{rl} y_1^{j+1} & = y_1^{j}- \left(\begin{matrix} F_y(y_1^{j})\\ (\phi_1^j)^{\top}\end{matrix}\right)^{-1} \left(\begin{matrix} F(y_1^{j})\\ 0\end{matrix}\right)\:,\\[3ex] \phi_1^{j+1} & = \left(\begin{matrix}F_y(y_1^{j+1})\\ (\phi_1^j)^{\top}\end{matrix}\right)^{-1} \left(\begin{matrix} 0\\ 1\end{matrix}\right), \quad j=0,1,2,\dotsc \end{array}\right. $$
(9)

As illustrated in Fig. 3b, the Moore–Penrose continuation can be interpreted as a variant of Keller's method in which the tangent vector is updated at every Newton step. When the new point \({y_1}\) is found, the tangent vector \({\phi_1}\) is immediately obtained as \({\phi_1^{j+1}/\|\phi_1^{j+1}\|}\) from the last Newton iteration, since \({\phi_1^j}\) does not necessarily have unit length.

Finally, for both the pseudo-arclength and Moore–Penrose continuation methods one can prove that they converge (with superlinear convergence), provided that \({y_0}\) is a regular point and the stepsize is sufficiently small.
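
To make the scheme concrete, here is a minimal Python sketch of pseudo-arclength continuation, Eq. (8), applied to a problem with a fold, \({F(u,p)=p-u^2}\), so that the branch is followed through the turning point; the stepsize, tolerances, and example are illustrative assumptions.

```python
import numpy as np

def F(y):                       # y = (u, p), F(u, p) = p - u**2 (fold at u = 0)
    return np.array([y[1] - y[0]**2])

def Fy(y):
    return np.array([[-2.0 * y[0], 1.0]])

def tangent(y, phi_prev):
    """Unit vector spanning the nullspace of F_y(y), oriented along phi_prev."""
    phi = np.linalg.solve(np.vstack([Fy(y), phi_prev]), np.array([0.0, 1.0]))
    return phi / np.linalg.norm(phi)

def pseudo_arclength(y0, phi0, h=0.1, n_points=60, tol=1e-10):
    branch, y, phi = [y0.copy()], y0.copy(), phi0 / np.linalg.norm(phi0)
    for _ in range(n_points):
        y_new = y + h * phi                        # tangent predictor
        for _ in range(10):                        # Newton corrector, Eq. (8)
            J = np.vstack([Fy(y_new), phi])
            dy = np.linalg.solve(J, np.concatenate([-F(y_new), [0.0]]))
            y_new = y_new + dy
            if np.linalg.norm(dy) < tol:
                break
        phi = tangent(y_new, phi)                  # new tangent, same orientation
        y = y_new
        branch.append(y.copy())
    return np.array(branch)

# start on the upper branch (u, p) = (1, 1) and move towards the fold at p = 0
branch = pseudo_arclength(np.array([1.0, 1.0]), np.array([-1.0, -2.0]))
print(branch[[0, len(branch) // 2, -1]])   # passes the fold and continues with u < 0
```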

Discretization of BVPs

In this section, orthogonal collocation [3,15] is described, a discretization technique to approximate the solution of a generic BVP (3) by a suitable AP. Let u be at least in the space \({\mathcal{C}^{1}([0,1],\mathbb{R}^{n})}\) of continuously differentiable vector‐valued functions defined on \({[0,\,1]}\). For BVPs the rescaled time \({t=T\tau}\) is used, so that the period T becomes a parameter, and in the sequel T will be addressed as such. Introduce a time mesh \({0=\tau_{0}<\tau_{1}<\dotsc<\tau_{N}=1}\) and, on each interval \({[\tau_{j-1},\tau_{j}]}\), approximate the function u by a vector‐valued polynomial \({\wp_{j}}\) of degree m, \({j=1,\dotsc,N}\). The polynomials \({\wp_{j}}\) are determined by imposing the ODE in (3) at m collocation points \({z_{j,i}}\), \({i=1,\dotsc,m}\), i. e.,

$$ \dot{\wp}_{j}(z_{j,i}) = f(\wp_{j}(z_{j,i}),p)\:,\quad j=1,\dotsc,N\:,\quad i=1,\dotsc,m\:. $$
(10)

One usually chooses the so‐called Gauss points as the collocation points, i. e., the roots of the mth‑degree Legendre polynomial, mapped to each mesh interval. Moreover, \({\wp_1(0)}\) and \({\wp_N(1)}\) must satisfy the boundary conditions and the whole piecewise polynomial must satisfy the integral conditions.
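
As a small illustration of this choice, the sketch below generates the collocation points \({z_{j,i}}\): the Gauss points on the reference interval \({[-1,1]}\) are obtained from numpy's Gauss–Legendre rule and mapped into each mesh interval \({[\tau_{j-1},\tau_j]}\); the mesh and the degree m are arbitrary example values.

```python
import numpy as np
from numpy.polynomial.legendre import leggauss

def collocation_points(mesh, m):
    """Gauss points z_{j,i}, i = 1..m, in each mesh interval [tau_{j-1}, tau_j]."""
    z_ref, _ = leggauss(m)                  # roots of the m-th Legendre polynomial on [-1, 1]
    points = []
    for a, b in zip(mesh[:-1], mesh[1:]):
        points.append(0.5 * (a + b) + 0.5 * (b - a) * z_ref)   # map to [a, b]
    return np.array(points)                 # shape (N, m)

mesh = np.linspace(0.0, 1.0, 6)             # N = 5 intervals
print(collocation_points(mesh, 4))
```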

Counting the number of unknowns, the discretization of (3) leads to nN scalar polynomials, each with \({(m+1)}\) coefficients, plus \({n_p}\) free parameters, so there are \( nN(m+1)+n_p \) continuation variables. These variables are matched by nmN collocation equations from (10), \( n(N-1) \) continuity conditions at the interior mesh points, \({n_b}\) boundary conditions, and \({n_i}\) integral conditions, for a total of \( nN(m+1)+n_b+n_i-n \) algebraic equations. Thus, in order for these equations to compose an AP, the number of free parameters is generically \( n_p=n_b+n_i-n+1 \) and, typically, one of them is the period T.

The collocation method yields high accuracy with superconvergence at the mesh points [15]. The mesh can also be adapted during continuation, for instance to minimize the local discretization error [68]. The equations can be solved efficiently by exploiting the particular sparsity structure of the Jacobi matrix in the Newton iteration. In particular, a few full but essentially smaller systems are solved instead of one sparse but large system. During this process one finds two nonsingular \({(n\times n)}\) -submatrices M 0 and M 1 such that \({M_{0}u(0)+M_{1}u(1)=0}\), i. e., the monodromy matrix \({M=-M_{1}^{-1}M_{0}}\) is found as a by‐product. In the case of a periodic BVP (\({u(1)=u(0)}\)) the Floquet multipliers are therefore computed at low computational cost.

Normal Forms and the Center Manifold

Local bifurcation analysis relies on the reduction of the dynamics of system (1) to a lower-dimensional center manifold \({H_0}\) near nonhyperbolic equilibria or limit cycles, i. e., when they bifurcate at some critical parameter \({p_0}\). The existence of \({H_0}\) follows from the Center Manifold Theorem (CMT), see, e. g., [11], while the reduction principle is shown in [72]. The reduced ODE has the same dimension as \({H_0}\), given by the number \({n_c}\) of critical eigenvalues or nontrivial multipliers (counting multiplicity), and is transformed to a normal form. The power of this approach is that the bifurcation scenario of the normal form is preserved in the original system. The normal form for a specific bifurcation is usually studied only up to a finite order, i. e., truncated, and many diagrams for bifurcations with higher codimension are in principle incomplete due to global phenomena, such as connecting orbits. Also, \({H_0}\) is not necessarily unique or smooth [75], but fortunately, one can still draw some useful qualitative conclusions.

The codimension (codim) of a bifurcation is the minimal number of parameters needed to encounter the bifurcation and to unfold the corresponding normal form generically. Therefore, in practice, one finds codim 1 phenomena when varying a single parameter, and continues them as curves in two‐parameter planes. Codim 2 phenomena are found as isolated points along codim 1 bifurcation curves. Still, codim 2 bifurcations are important as they are the roots of codim 1 bifurcations, in particular of global phenomena. For this reason they are called organizing centers as, around these points in parameter space, one‐parameter bifurcation scenarios change. For parameter‐dependent systems the center manifold H 0 can be extended to a parameter‐dependent invariant manifold \({H(p),H_{0}=H(p_{0})}\), so that the bifurcation scenario on H(p) is preserved in the original system for \({\|p-p_{0}\|}\) sufficiently small.

In the following, the normal forms for all codim 1 bifurcations of equilibria and limit cycles are presented and their bifurcation scenarios discussed. Then a general computational method for the approximation, up to a finite order, of the parameter‐dependent center manifold H(p) is presented. The method gives, as a by‐product, explicit formulas for the coefficients of a given normal form in terms of the vector field f of system (1).

Normal Forms

Bifurcations can be defined by certain algebraic conditions. For instance, an equilibrium is nonhyperbolic if \({\Re(\lambda)=0}\) holds for some eigenvalue. The simplest possibilities are \({\lambda=0}\) (limit point bifurcation or branch point, though the latter is nongeneric, see Sect. “Branch Switching”) and \({\lambda_{1,2}=\pm i\omega_0, \omega_0 > 0}\) (Hopf bifurcation). Bifurcations of limit cycles appear if some of the nontrivial multipliers cross the unit circle. The three simplest possibilities are \({\mu=1}\) (limit point of cycles), \({\mu=-1}\) (period‐doubling) or \({\mu_{1,2}=\text{e}^{\pm i\theta_{0}},0 < \theta_0 < \pi}\) (Neimark–Sacker).

At the bifurcation (\({p=p_0}\)), the linearization of system (1) near the equilibrium \({x_0}\) or around a limit cycle does not provide any stability information within the center manifold. In this case, nonlinear terms are also necessary to obtain such knowledge. This is provided by the critical normal form coefficients as discussed below. The state variable in the normal form will be denoted by w and the unfolding parameter by \({\alpha\in\mathbb{R}}\), with \({w=0}\) at \({\alpha=0}\) being a nonhyperbolic equilibrium. Bifurcations are labeled in accordance with the scheme of [38].

Codimension 1 Bifurcations of Equilibria

Limit point bifurcation (LP): The equilibrium has a simple eigenvalue \({\lambda=0}\) and the restriction of (1) to a one‐dimensional center manifold can be transformed to the normal form

$$ \dot{w} = \alpha + a_{\texttt{LP}}w^{2} +O(|w|^{3})\:,\quad w\in\mathbb{R}\:, $$
(11)

where generically \({a_{\texttt{LP}}\neq 0}\) and O denotes higher-order terms in the state variable, which may also depend on the parameter. When the unfolding parameter α crosses the critical value (\({\alpha=0}\)), two equilibria, one stable and one unstable in the center manifold, collide and disappear. This bifurcation is also called saddle‐node, fold or tangent bifurcation. Note that this bifurcation occurs at the folds in Figs. 2 and 3.

Hopf bifurcation (H): The equilibrium has a pair of purely imaginary eigenvalues \({\lambda_{1,2}=\pm i\omega_{0}}\), \({\omega_0>0}\), and the restriction of (1) to the two‐dimensional center manifold is given by

$$ \dot{w} = (i\omega_{0}+\alpha)w + c_{\texttt{H}}w^{2}\bar{w}+O(|w|^{4})\:,\quad w\in\mathbb{C}\:, $$
(12)

where generically the first Lyapunov coefficient \( d_{\texttt{H}}=\Re(c_{\texttt{H}})\neq 0 \). When α crosses the critical value, a limit cycle is born. It is stable (and present for \( \alpha> 0 \)) if \( d_{\texttt{H}}<0 \), and unstable (and present for \( \alpha < 0 \)) if \( d_{\texttt{H}}> 0 \). The case \( d_{\texttt{H}}<0 \) is called supercritical or “soft”, while \( d_{\texttt{H}}> 0 \) is called subcritical or “hard”, as there is no (local) attractor left after the bifurcation. This bifurcation is most often called the Hopf bifurcation, but also Poincaré–Andronov–Hopf, since the phenomenon was already known to Poincaré and Andronov.

Codimension 1 Bifurcations of Limit Cycles

Bifurcations of limit cycles are theoretically very well understood using the notion of a Poincaré map. To define this map, choose an \({(n-1)}\)‐dimensional smooth cross‐section Σ transversal to the cycle and introduce a local coordinate \({z\in\mathbb{R}^{n-1}}\) such that \({z=Z(x)}\) is defined on Σ and invertible. For example, one chooses a coordinate plane \({x_{j}=0}\) such that \({f_{j}(x)|_{x_{j}=0}\neq 0}\). Let \({x_{0}(t)}\) be the cycle with period T, so that \({z_0=Z(x_{0}(0))}\) is the cycle intersection with Σ, where \({z_0=0}\) can always be assumed without loss of generality. Denote by T(z) the return time to Σ defined by the flow Φ, with \({T(z_0)=T}\). Now, the Poincaré map \({P\colon{\mathbb R}^{n-1}\rightarrow\mathbb{R}^{n-1}}\) maps each point close enough to \({z=0}\) to the next return point on Σ, i. e., \({P\colon z\mapsto Z(\Phi^{T(z)}(Z^{-1}(z)))}\). Thus, bifurcations of limit cycles turn into bifurcations of fixed points of the Poincaré map, which can be easily described using local bifurcation theory. Moreover, it can be shown that the \({n-1}\) eigenvalues of the linearization \({P_z(0)}\) are the nontrivial eigenvalues of the monodromy matrix \({M=\Phi_x^{T}(x_{0}(0))}\), which also has a trivial eigenvalue equal to 1 (the vector \({f(x_0(0),p)}\), tangent to the cycle at \({x_{0}(0)}\), is mapped by M to itself). The eigenvalues of \({P_z(0)}\) are therefore the nontrivial multipliers of the cycle. Although the Poincaré map and its linearization can also be computed numerically through suitably organized simulations (so‐called shooting techniques [17]), it is better to handle both the cycle multipliers and the normal form computations associated with nonhyperbolic cycles using BVPs [9,23,57]. Here, however, the normal forms on a Poincaré section are presented, where \({w=0}\) at \({\alpha=0}\) is the fixed point of the Poincaré map corresponding to a nonhyperbolic limit cycle.
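
As an illustration of such a shooting-style computation, the sketch below evaluates the Poincaré map of a planar example system with scipy, using an event function to detect returns to the section \({\Sigma=\{x_2=0\}}\) crossed upwards; the Van der Pol vector field, the section, and the tolerances are assumptions made for this example only.

```python
import numpy as np
from scipy.integrate import solve_ivp

def f(t, x, p=1.0):
    """Van der Pol oscillator (illustrative example with a stable limit cycle)."""
    return [x[1], p * (1.0 - x[0]**2) * x[1] - x[0]]

def section(t, x):
    return x[1]                            # Sigma = {x_2 = 0}
section.direction = 1.0                    # register upward crossings only

def poincare_map(z, t_max=50.0):
    """Return the next intersection with Sigma and the return time T(z) (shooting)."""
    sol = solve_ivp(f, (0.0, t_max), [z, 0.0], events=section,
                    rtol=1e-10, atol=1e-12)
    t_ev, x_ev = sol.t_events[0], sol.y_events[0]
    k = int(np.argmax(t_ev > 1e-8))        # skip a possible event at t = 0
    return x_ev[k, 0], t_ev[k]

z = 2.0
for _ in range(5):
    z, T = poincare_map(z)
    print(z, T)     # iterates converge to the fixed point of P, i.e., the stable cycle
```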

Limit point of cycles (LPC): The fixed point has one simple nontrivial multiplier \({\mu=1}\) on the unit circle and the restriction of P to a one‐dimensional center manifold has the form

$$ w \mapsto \alpha + w + a_{\texttt{LPC}} w^2 + O(w^3)\:,\quad w\in\mathbb{R}\:, $$

where generically \({a_{\texttt{LPC}} \neq 0}\). As for the LP bifurcation, two fixed points collide and disappear when α crosses the critical value. This corresponds to the collision and disappearance of two limit cycles of the original vector field f.

Period-doubling (PD): The fixed point has one simple multiplier \({\mu=-1}\) on the unit circle and the restriction of P to a one‐dimensional center manifold can be transformed to the normal form

$$ w \mapsto -(1+\alpha)w + b_{\texttt{PD}} w^3 + O(w^4)\:,\quad w\in\mathbb{R}\:, $$

where generically \({b_{\texttt{PD}} \neq 0}\). When the parameter α crosses the critical value, a cycle of period 2 for P bifurcates from the fixed point, corresponding to a limit cycle of period approximately 2T for the original system (1). This phenomenon is also called the flip bifurcation. If \({b_{\texttt{PD}}}\) is positive [negative], the bifurcation is supercritical [subcritical] and the double-period cycle is stable [unstable] (and present for \({\alpha>0}\) [\({\alpha<0}\)]).

Neimark–Sacker (NS): The fixed point has simple critical multipliers \({\mu_{1,2}=\text{e}^{\pm i \theta_0}}\) and no other multipliers on the unit circle. Assume that \({\text{e}^{i k\theta_0} \neq 1}\) for \({k=1,2,3,4}\), i. e., there are no strong resonances. Then, the restriction of P to a two‐dimensional center manifold can be transformed to the normal form

$$ w \mapsto \text{e}^{i \theta(\alpha)}(1+\alpha)w + c_{\texttt{NS}}w^2\bar{w} + O(|w|^4)\:,\quad w\in\mathbb{C}\:, $$

where \({c_{\texttt{NS}}}\) is a complex number and \({\theta(0)=\theta_0}\). Provided \({d_{\texttt{NS}}=\Re(\text{e}^{-i \theta_0}c_{\texttt{NS}}) \neq 0}\), a unique closed invariant curve for P appears around the fixed point when α crosses the critical value. In the original vector field, this corresponds to the appearance of a two-dimensional torus with (quasi-)periodic motion. This bifurcation is also called secondary Hopf or torus bifurcation. If \({d_{\texttt{NS}}}\) is negative [positive], the bifurcation is supercritical [subcritical] and the invariant curve (torus) is stable [unstable] (and present for \({\alpha>0}\) [\({\alpha<0}\)]).

Center Manifolds

Generally speaking, the CMT allows one to restrict the dynamics of (1) to a suspended system

$$ \dot{w} = G(w,\alpha), \: \: G\colon\mathbb{R}^{n_c}\times\mathbb{R}^{n_p}\to\mathbb{R}^{n_c}\:, $$
(13)

on the center manifold H, where \({n_p}\) is typically 1 or 2 depending on the codimension of the bifurcation. Although the normal forms (13) to which one can restrict the system near nonhyperbolic equilibria and cycles are known, these results are not directly applicable. Thus, efficient numerical algorithms are needed in order to verify the nondegeneracy conditions in the normal forms listed above.

Here, a powerful normalization method due to Iooss and coworkers is reviewed, see [9,14,29,37,55,59]. This method assumes very little a priori information: only the type of bifurcation, so that the form of G, i. e., which of its coefficients are nonzero, is known. This fits very well in a numerical bifurcation setting, where one computes families of solutions and monitors and detects the occurrence of bifurcations with higher codimension during the continuation.

Table 1 Critical normal form coefficients for generic codim 1 bifurcations of equilibria and fixed points. Here, A, B and C refer to the expansion (14) for equilibria, while for fixed points they refer to (19)

Without loss of generality it is assumed that \({x_0=0}\) at the bifurcation point \({p_0=0}\). Expand \({f(x,p)}\) in Taylor series

$$ \begin{aligned} f(x,p) &= Ax + \tfrac{1}{2}B(x,x) +\tfrac{1}{6}C(x,x,x)+ J_{1}p \\ &+ A_{1}(x,p)+ \dotsc\:, \end{aligned}$$
(14)

parametrize, locally to \({(x,p)=(0,0)}\), the parameter‐dependent center manifold by

$$ x=H(w,\alpha)\:,\quad H\colon\mathbb{R}^{n_{c}}\times\mathbb{R}^{n_{p}}\rightarrow\mathbb{R}^{n}\:, $$
(15)

and define a relation \({p=V(\alpha)}\) between the original and unfolding parameters. The invariance of the center manifold can be exploited by differentiating this parametrization with respect to time to obtain the so‐called homological equation

$$ f(H(w,\alpha),V(\alpha)) = H_{w}(w,\alpha) G(w,\alpha)\:. $$
(16)

To verify nondegeneracy conditions, only an approximation to the solution of the homological equation is required. To this end, \({G,H}\) and V are expanded in Taylor series:

$$ \begin{aligned} G(w,\alpha) &= \sum_{|\mu|+|\nu|\geq 1}\frac{1}{\mu!\nu!}g_{\mu\nu}w^{\mu}\alpha^{\nu}\:,\\ H(w,\alpha) &= \sum_{|\mu|+|\nu|\geq 1}\frac{1}{\mu!\nu!}h_{\mu\nu}w^{\mu}\alpha^{\nu}\:,\\ V(\alpha) &= v_{10}\alpha_{1} + v_{01}\alpha_{2} +O(\|\alpha\|^2)\:, \end{aligned}$$
(17)

where \({g_{\mu\nu}}\) are the desired normal form coefficients and \({\mu,\nu}\) are multi‐indices. For a multi-index μ one has \({\mu=(\mu_{1},\mu_{2},\dotsc,\mu_{n_c})}\) with nonnegative integer components \({\mu_i}\), \( \mu! = \mu_{1}!\mu_{2}!\dotsc\mu_{n_c}!\), \( |\mu|=\mu_{1}+\mu_{2}+\dotsc+\mu_{n_c}\), \( \tilde{\mu}\leq \mu \) if \( \tilde{\mu}_{i} \leq \mu_{i}\) for all \( i=1,\dotsc,n_c \), and \( w^{\mu}=w_1^{\mu_1}\dotsc w_{n_c}^{\mu_{n_c}}\). When dealing with just the critical coefficients, i. e., \({\alpha=0}\), the index ν is omitted. Substitution of this ansatz into (16) gives a formal power series in w and α. As both sides should be equal for all w and α, the coefficients of the corresponding powers should be equal. For each vector \({h_{\mu\nu}}\), (16) gives linear systems of the form

$$ L_{\mu\nu}h_{\mu\nu} = R_{\mu\nu}\:, $$
(18)

where \({L_{\mu\nu}=A-\gamma_{\mu\nu}I_{n}}\) (\({\gamma_{\mu\nu}}\) is a weighted sum of the critical eigenvalues) and \({R_{\mu\nu}}\) involves known quantities of G and H of order less than or equal to \({|\mu|+|\nu|}\). This leads to an iterative procedure, where either system (18) is nonsingular, or the required coefficients \({g_{\mu\nu}}\) are obtained by imposing solvability, i. e., that \({R_{\mu\nu}}\) lies in the range of \({L_{\mu\nu}}\) and is therefore orthogonal to the eigenvectors of \({L_{\mu\nu}^{\top}}\) associated with the zero eigenvalue. In the second case, the solution of (18) is not unique, and one typically selects the \({h_{\mu\nu}}\) without components in the nullspace of \({L_{\mu\nu}}\). However, the nonuniqueness of the center manifold does not affect qualitative conclusions. The parameter transformation, i. e., the coefficients \({v_{\nu}}\), is obtained by imposing certain conditions on some normal form coefficients, leading to a solvable system.
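
As a small concrete instance of these formulas, the sketch below evaluates the critical limit point coefficient \({a_{\texttt{LP}}=\tfrac{1}{2}\langle p,B(q,q)\rangle}\) (with \({Aq=0}\), \({A^{\top}p=0}\), \({\langle p,q\rangle=1}\), cf. Table 1), approximating A and the bilinear form B by finite differences; the example vector field and step size are assumptions for illustration.

```python
import numpy as np

def lp_coefficient(f, x0, p0, eps=1e-4):
    """a_LP = 0.5 * <p, B(q, q)> with A q = 0, A^T p = 0, <p, q> = 1 (cf. Table 1)."""
    n = len(x0)
    # finite-difference Jacobi matrix A = f_x(x0, p0)
    A = np.column_stack([(f(x0 + eps * e, p0) - f(x0 - eps * e, p0)) / (2 * eps)
                         for e in np.eye(n)])
    # right and left null vectors: eigenvectors for the eigenvalue closest to zero
    lam, V = np.linalg.eig(A)
    q = np.real(V[:, np.argmin(np.abs(lam))])
    lam_t, W = np.linalg.eig(A.T)
    pvec = np.real(W[:, np.argmin(np.abs(lam_t))])
    pvec = pvec / np.dot(pvec, q)                 # normalization <p, q> = 1
    # bilinear form B(q, q) as a second directional derivative of f along q
    Bqq = (f(x0 + eps * q, p0) - 2.0 * f(x0, p0) + f(x0 - eps * q, p0)) / eps**2
    return 0.5 * np.dot(pvec, Bqq)

# Hypothetical example: f(x, p) = (p - x1**2, -x2), a fold at x = 0 for p = 0
f = lambda x, p: np.array([p - x[0]**2, -x[1]])
print(lp_coefficient(f, np.array([0.0, 0.0]), 0.0))   # ~ -1 (the sign depends on the
                                                      # orientation of q); nonzero, so
                                                      # the fold is nondegenerate
```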

One can perform an analogous procedure for the Poincaré map by using a Taylor expansion

$$ P(z,p)=Az+\tfrac{1}{2}B(z,z) +\tfrac{1}{6}C(z,z,z)+\dotsc $$
(19)

and the homological equation for maps

$$ P(H(w,\alpha),V(\alpha))=H(G(w,\alpha),\alpha)\:. $$
(20)

The detailed derivation of the formulas for all codim 1 and 2 cases for equilibria and cycles can be found in [9,55,56,59,60]. The formulas for the critical normal form coefficients for codim 1 bifurcations are presented in Table 1. Note once more that for limit cycles a numerically more appropriate method exists [57], based on periodic normal forms [47,48].

Continuation and Detection of Bifurcations

Along a solution branch one generically passes through bifurcation points of higher codimension. To detect such an event, a test function φ is defined whose regular zeros correspond to the event. If at two consecutive points \({y_{k-1},y_{k}}\) along the branch the test function changes sign, i. e., \({\varphi(y_{k})\varphi(y_{k-1})<0}\), then the zero can be located more precisely. Usually, a one‐dimensional secant method is used to find such a point. Now, if system (1) has a bifurcation at \({y_{0}=(x_{0},p_{0})}\), then there is generically a curve \({y=y(s)}\) along which the system displays this bifurcation. In order to find this curve, one starts from the known point \({y_0}\), formulates a defining system, and then continues that solution using one extra free parameter.
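
A minimal sketch of this detection step: the test function is evaluated at consecutive continuation points, a sign change flags the bifurcation, and a secant iteration on the branch parametrization locates it; the branch parametrization and test function used here are illustrative stand-ins.

```python
import numpy as np

def locate_zero(phi, s_a, s_b, tol=1e-12, max_iter=30):
    """Secant iteration for phi(s) = 0 between two points with a sign change."""
    f_a, f_b = phi(s_a), phi(s_b)
    for _ in range(max_iter):
        s_new = s_b - f_b * (s_b - s_a) / (f_b - f_a)
        s_a, f_a = s_b, f_b
        s_b, f_b = s_new, phi(s_new)
        if abs(f_b) < tol:
            break
    return s_b

# Illustrative stand-ins: in practice phi(s) would be, e.g., det(f_x) evaluated
# at the continuation points y(s) along the branch.
phi = lambda s: np.cos(s)              # hypothetical test function along the branch
s_values = np.linspace(0.0, 3.0, 16)   # consecutive continuation points
for s0, s1 in zip(s_values[:-1], s_values[1:]):
    if phi(s0) * phi(s1) < 0:          # sign change between y_{k-1} and y_k
        print("bifurcation located at s =", locate_zero(phi, s0, s1))   # pi/2
```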

Test Functions for Codimension 1 Bifurcations

An equilibrium may lose stability through a limit point, a Hopf bifurcation, or a branch point. At a limit or branch point bifurcation the Jacobi matrix \( A=f_{x}(x_{0},p_{0}) \) has an algebraically simple eigenvalue \( \lambda = 0 \) (see Sect. “Branch Switching” for branch points), while at a Hopf point there is exactly one pair of complex conjugate eigenvalues \( \lambda = \pm i\omega_0\), \( \omega_0\neq 0 \).

The simplest way of detecting the passage through a bifurcation during continuation, is to monitor the eigenvalues of the Jacobi matrix. For large systems and stiff problems this is prohibitive as it is numerically expensive and not always accurate. Instead, one can base test functions on determinants.

Test Functions for Limit Point Bifurcations

Along an equilibrium curve the product of the eigenvalues changes sign at a limit point. Recall that the determinant of A is the product of its eigenvalues. Therefore, the following test function can be computed

$$ \varphi_{\texttt{LP}} = \det(f_{x}(x,p)) $$
(21)

without computing the eigenvalues explicitly.

For the LP bifurcation the pseudo‐arclength or Moore–Penrose continuation methods provide an excellent test function as a by‐product of the continuation. Note that while passing through the fold, the last component of the tangent vector ϕ changes sign as the continuation direction in the parameter reverses. The test function is therefore defined as

$$ \varphi_{\texttt{LP}} = \phi_{n+1}\:. $$
(22)

Test Functions for Hopf Bifurcations

Denote the eigenvalues of A by \({\lambda_{i}(x,p)\:, i=1,\dotsc,n}\) and consider the following product

$$ \varphi_{\texttt{H}}=\prod_{i<j}(\lambda_{i}(x,p)+\lambda_{j}(x,p))\:. $$

It can be shown that this product has a regular zero at a simple Hopf point [9], but it should be checked that this zero corresponds to an imaginary pair and not to the neutral saddle case \({\lambda_i=-\lambda_j\:,\lambda_i\in\mathbb{R}}\).

Also here one can compute this product without explicit computation of the eigenvalues, using the bi‐alternate product [34,42,45,56]. The bi‐alternate product of two \({(n \times n)}\)-matrices A and B, denoted by \({A\odot B}\), is an \({(m \times m)}\)‑matrix C (\({m=n(n-1)/2}\)) with row index \({(i,j)}\), column index \({(k,l)}\), and elements

$$ C_{(i,j)(k,l)} = \frac{1}{2} \left\{ \left|\begin{matrix} a_{ik} & a_{il} \\ b_{jk} & b_{jl} \end{matrix}\right| + \left|\begin{matrix} b_{ik} & b_{il} \\ a_{jk} & a_{jl} \end{matrix}\right| \right\}\:,\quad \begin{array}{l} i=2,3,\dotsc,n\:,\enskip j=1,2,\dotsc,i-1\:,\\ k=2,3,\dotsc,n\:,\enskip l=1,2,\dotsc,k-1\:. \end{array} $$

Let A be an \({n \times n}\)-matrix with eigenvalues \({\lambda_{1},\dots,\lambda_{n}}\), then [73]

  • \({A\odot A}\) has eigenvalues \({\lambda_i\lambda_j}\),

  • \({2A\odot I_{n}}\) has eigenvalues \({\lambda_i+\lambda_j}\).

The test function can now be expressed as

$$ \varphi_{\texttt{H}}=\det(2 f_{x}(x,p)\odot I_{n})\:. $$
(23)

For higher-dimensional systems, this matrix becomes very large and one should use preconditioning or subspace methods, see [36].
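
The definition above translates directly into code. The sketch below assembles \({A\odot B}\) element-wise from the 2×2 determinants and checks numerically that \({2A\odot I_{n}}\) has eigenvalues \({\lambda_i+\lambda_j}\); the test matrix is an arbitrary (symmetric, for simplicity) example.

```python
import numpy as np
from itertools import combinations

def bialternate(A, B):
    """Bi-alternate product A (.) B, rows/columns indexed by pairs (i, j) with i > j."""
    n = A.shape[0]
    pairs = [(i, j) for i in range(1, n) for j in range(i)]
    m = len(pairs)                                   # m = n (n - 1) / 2
    C = np.zeros((m, m))
    for r, (i, j) in enumerate(pairs):
        for c, (k, l) in enumerate(pairs):
            C[r, c] = 0.5 * ((A[i, k] * B[j, l] - A[i, l] * B[j, k])
                             + (B[i, k] * A[j, l] - B[i, l] * A[j, k]))
    return C

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))
A = A + A.T                                          # symmetric: real eigenvalues
lam = np.linalg.eigvals(A).real
sums = sorted(lam[i] + lam[j] for i, j in combinations(range(4), 2))
test = sorted(np.linalg.eigvals(2.0 * bialternate(A, np.eye(4))).real)
print(np.allclose(sums, test))   # True: eigenvalues of 2A (.) I_n are lambda_i + lambda_j
```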

Test Functions for Codimension 1 Cycle Bifurcations

Recall that the nontrivial multipliers \({\mu_{1}, \dots, \mu_{n-1}}\) determine the stability of the cycle and can be efficiently computed as the nontrivial multipliers of the monodromy matrix M, see Sect. “Discretization of BVPs”. Now the following two sets of test functions can be used to detect LPC, PD and NS bifurcations

$$ \begin{aligned} \varphi_{\texttt{LPC}} & = \prod_{i=1}^{n-1} (\mu_{i}-1)\:, &\quad \varphi_{\texttt{LPC}} & = \phi_p\:,\\ \varphi_{\texttt{PD}} &= \prod_{i=1}^{n-1} (\mu_{i}+1)\:, &\quad \varphi_{\texttt{PD}} & = \det(M+I_{n})\:,\\ \varphi_{\texttt{NS}} &= \prod_{1\leq i<j\leq n-1} (\mu_{i}\mu_{j}-1)\:, &\quad \varphi_{\texttt{NS}} & = \det(M\odot M-I_{n(n-1)/2})\:. \end{aligned} $$

where \({\phi_p}\) denotes the parameter component of the tangent vector, similarly to (22). It should also be checked that a zero of \({\varphi_{\texttt{NS}}}\) corresponds to nonreal multipliers \({\text{e}^{\pm i\theta_0}}\), similar to the test function used to detect the Hopf bifurcation.

There are alternatives for these test functions. One can define bordered systems using the monodromy matrix [9,42] or a BVP formulation [23].
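
Given the nontrivial multipliers extracted from the monodromy matrix, the products above are straightforward to evaluate; in the following sketch the multiplier values are made up purely for illustration.

```python
import numpy as np
from itertools import combinations

def cycle_test_functions(mu):
    """Test functions for LPC, PD and NS from the nontrivial multipliers mu_1..mu_{n-1}."""
    mu = np.asarray(mu, dtype=complex)
    phi_lpc = np.prod(mu - 1.0).real
    phi_pd  = np.prod(mu + 1.0).real
    phi_ns  = np.prod([mu[i] * mu[j] - 1.0
                       for i, j in combinations(range(len(mu)), 2)]).real
    return phi_lpc, phi_pd, phi_ns

# hypothetical multipliers of a cycle close to a Neimark-Sacker bifurcation
mu = [0.3, 0.99 * np.exp(1j * 0.7), 0.99 * np.exp(-1j * 0.7)]
print(cycle_test_functions(mu))   # phi_NS is small and changes sign as |mu_{2,3}| crosses 1
```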

Defining Systems for Codimension 1 Bifurcations of Equilibria

To compute curves of codim 1 equilibrium bifurcations, first a defining system of the form (2) needs to be formulated to define the bifurcation curve, and then a second parameter must be freed for the continuation, so that now \({p\in\mathbb{R}^{2}}\). This is done by adding to the equilibrium equation \({f(x,p)=0}\) appropriate equations that characterize the bifurcation.

Defining systems come in two flavors, fully (also called standard) and minimally augmented systems. The former computes all relevant eigenspaces, while the latter exploits the rank deficiency of the Jacobi matrix and adds only a few strategic equations to regularize the continuation problem. The evaluation of such equations requires the eigenspaces, but these can be computed separately. As the names suggest, the difference is in the dimension of the defining system, leading to differently sized problems. In particular, the advantage of minimally augmented systems is that several smaller linear problems are solved instead of one big one, which is known to be better in terms of both accuracy and computational time. For small phase dimension n there is little difference in computational effort. Both minimally and fully extended defining systems for limit point and Hopf bifurcations are presented below. The regularity of these systems is also known, e. g., see [42].

Defining Systems for Limit Point Bifurcations

The first defining system is minimally extended by adding the test function (21):

$$ \left\{\arraycolsep0.17em\begin{array}{rl} f(x,p)&=0\:,\\ \det(f_{x}(x,p))&=0\:. \end{array}\right. $$
(24)

This system consists of \({n+1}\) equations in \({n+2}\) unknowns \({(x,p)}\). One problem is that the computation of the determinant can lose accuracy for large systems. This can be avoided in two ways, by augmenting the system with the eigenspaces or using a bordering technique.

Fully extended systems include the eigenvectors and for a LP bifurcation this leads to

$$ \left\{\arraycolsep0.17em\begin{array}{rl} f(x,p)&=0\:,\\ f_{x}(x,p)v&=0\:,\\ v_{0}^{\top} v-1&=0\:, \end{array}\right. $$
(25)

where \({v_0}\) is a vector not orthogonal to \({\mathcal{N}(f_x(x,p))}\). This system consists of \({2n+1}\) equations in \({2n+2}\) unknowns \({(x,p,v)}\).

The bordering technique uses the bordering lemma [41]. Let \({A\in\mathbb{R}^{n\times n}}\) be a (possibly singular) matrix and let \({B,C\in\mathbb{R}^{n\times m}}\) be such that the system

$$ \left(\begin{matrix}A & B \\ C^{\top} & 0_{m}\end{matrix}\right) \left(\begin{matrix} V \\ g \end{matrix}\right) = \left(\begin{matrix} 0_{n\times m}\\ I_m \end{matrix}\right) $$
(26)

is nonsingular (\({V\in\mathbb{R}^{n\times m}}\), \({g\in\mathbb{R}^{m\times m}}\)). Typically, B and C are associated with the eigenspaces of \({A^{\top}}\) and A corresponding to the zero eigenvalue, respectively, or, during continuation, approximated by their values computed at the previous point along the branch. It follows from the bordering lemma that A has rank deficiency m if and only if g has rank deficiency m.

With \({A=f_{x}(x,p)}\) and \({m=1}\), one has \({g=0}\) if and only if \({\det(f_{x}(x,p))=0}\). A modified and minimally extended system for limit points is thus given by

$$ \left\{\arraycolsep0.17em\begin{array}{rl} f(x,p)&=0\:,\\ g(x,p)&=0\:,\\ \end{array}\right. $$

where g is defined by (26) with \({A^{\top}B=AC=0}\) at a previously computed point. During the continuation the derivatives of g with respect to x and p are needed. They can either be approximated by finite differences, or explicitly (and efficiently) obtained from the second derivatives of the vector field f, see [42].
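
A sketch of this construction: the borders B and C are taken as (approximate) null vectors of \({A^{\top}}\) and A from a nearby point, and the scalar g is read off from the solution of the bordered system (26) with \({m=1}\); the parameter-dependent example matrix is an illustrative assumption.

```python
import numpy as np

def bordered_test_function(A, b, c):
    """g from the bordered system [[A, b], [c^T, 0]] (v; g) = (0; 1), Eq. (26) with m = 1.
    g vanishes exactly when A is singular."""
    n = A.shape[0]
    M = np.block([[A, b.reshape(-1, 1)],
                  [c.reshape(1, -1), np.zeros((1, 1))]])
    sol = np.linalg.solve(M, np.concatenate([np.zeros(n), [1.0]]))
    return sol[-1]                                  # sol[:-1] is the vector V

# Illustrative family A(p) = f_x along a branch, singular at p = 0 (a limit point)
A = lambda p: np.array([[p, 1.0], [0.0, -2.0]])
b = np.array([2.0, 1.0])       # ~ null vector of A(0)^T  (border B)
c = np.array([1.0, 0.0])       # ~ null vector of A(0)    (border C)
for p in (-0.1, 0.0, 0.1):
    print(p, bordered_test_function(A(p), b, c))    # g changes sign through p = 0
```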

Defining Systems for Hopf Bifurcations

Defining systems for Hopf bifurcations are formulated analogously to the LP case. Adding the test function (23) creates a minimally extended system

$$ \left\{\arraycolsep0.17em\begin{array}{rl} f(x,p)&=0\:,\\ \det(2 f_{x}(x,p)\odot I_{n})&=0\:, \end{array}\right. $$
(27)

while the fully extended system is given by

$$ \left\{\arraycolsep0.17em\begin{array}{rl} f(x,p)&=0\:,\\ f_{x}(x,p)v_{1}+\omega v_{2}&=0\:,\\ f_{x}(x,p)v_{2}-\omega v_{1}&=0\:,\\ w^{\top}_{1}v_{1}+w^{\top}_{2}v_{2}-1&=0\:,\\ w^{\top}_{1}v_{2}-w^{\top}_{2}v_{1} &=0\:, \end{array}\right. $$
(28)

where \({w=w_{1}+iw_{2}}\) is not orthogonal to the eigenvector \({v=v_{1}+iv_{2}}\) corresponding to the eigenvalue \({i\omega}\). The vector \({w=v^{k-1}}\) computed at the previous point is a suitable choice during continuation. System (28) is expressed using real variables and has \({3n+2}\) equations for \({3n+3}\) unknowns \({(x,p,v_1,v_2,\omega)}\).

A reduced defining system can be obtained from (28) by noting that the matrix \({f_{x}(x,p)^2+\kappa I_n}\) has rank deficiency two at a Hopf bifurcation point with \({\kappa=\omega^{2}}\) [67]. An alternative to (28) is now formulated as

$$ \left\{\arraycolsep0.17em\begin{array}{rl} f(x,p)&=0\:,\\ \left[ f_{x}(x,p)^2 + \kappa I_n \right] v&=0\:,\\ v^{\top} v-1&=0\:,\\ w^{\top} v&=0\:, \end{array}\right. $$
(29)

where w is not orthogonal to the two‐dimensional real eigenspace of the eigenvalues \({\pm i\omega}\). It has \({2n+2}\) equations for \({2n+3}\) unknowns \({(x,p,v,\kappa)}\). However, w needs to be updated during continuation, e. g., as the solution of \({\left([f_{x}(x,p)^2+\kappa I_n]^{\top} w,\,v^{\top} w\right)=(0,\,0)}\) computed at the previous continuation point.

A further reduction is obtained exploiting the rank deficiency. Consider the system

$$ \left(\begin{matrix} f_{x}(x,p)^2 + \kappa I_n& B\\ C^{\top} & 0_{2} \end{matrix}\right) \left(\begin{matrix} V \\ g \end{matrix}\right) = \left(\begin{matrix} 0_{n\times 2} \\ I_{2} \end{matrix}\right) $$

and it follows from the bordering lemma that g vanishes at Hopf points; any two components of g, e. g., \({g_{11}}\) and \({g_{22}}\) (see [42]), can be taken to augment Eq. (2), yielding the following minimally augmented system

$$ \left\{\arraycolsep0.17em\begin{array}{rl} f(x,p)&=0\:,\\ g_{11}&=0\:,\\ g_{22}&=0\:, \end{array}\right. $$
(30)

which has \({n+2}\) equations for \({n+3}\) unknowns \({(x,p,\kappa)}\).

Defining Systems for Codimension 1 Bifurcations of Limit Cycles

In principle, to study bifurcations of limit cycles one can compute numerically the Poincaré map and study bifurcations of fixed points. If system (1) is not stiff, then the Poincaré map and its derivatives may be obtained with satisfactory accuracy. In many cases, however, continuation using BVP formulations is much more efficient.

Suppose a cycle x bifurcates at \({p=p_0}\); then the BVP (4) defining the limit cycle must be augmented with suitable extra functions. As for codim 1 branches of equilibria, one can define either fully extended systems, by including the relevant eigenfunctions in the computation [9], or minimally extended systems, using bordered BVPs [23,39]. The regularity of these defining systems is also discussed in these references. Since the discretization of the cycle leads to large APs, here the minimally extended approach can lead to faster results, even though some more algebra is involved; see the comparison in [57]. Below, only the equations that are added to the defining system (4) for the continuation of limit cycles are presented.

Fully Extended Systems

The following equations can be used to augment (4) and continue codim 1 bifurcations of limit cycles. The eigenfunctions v need to be discretized in a similar way to that in Sect. “Discretization of BVPs”. The previous cycle \({x^{k-1}}\) and eigenfunction \({v^{k-1}}\) are assumed to be known.

LPC: For the limit point of cycles bifurcation, the BVP (4) is augmented with the equations

$$ \left\{\arraycolsep0.17em\begin{array}{rl} \dot{v}(\tau)-Tf_{x}(x(\tau),p)v(\tau) -\sigma f(x(\tau),p)&=0\:,\\ v(1)-v(0)&=0\:,\\ \int_{0}^{1} v^{\top}(\tau)\dot{x}^{k-1}(\tau) \mathrm{d}\tau&=0\:,\\ \int_{0}^{1} v^{\top}(\tau)v^{k-1}(\tau) \mathrm{d}\tau + \sigma\sigma^{k-1}&=1\:, \end{array}\right. $$
(31)

for the variables \({(x,p,T,v,\sigma)}\). Note that

$$ \left\{\arraycolsep0.17em\begin{array}{rl} \dot{v}(\tau)-Tf_{x}(x(\tau),p)v(\tau)-Tf_{p}(x(\tau),p)q -\sigma f(x(\tau),p)&=0\:,\\ v(1)-v(0)&=0\:,\\ \int_{0}^{1} v^{\top}(\tau)\dot{x}^{k-1}(\tau) \mathrm{d}\tau&=0\:,\\ \int_{0}^{1} v^{\top}(\tau)v^{k-1}(\tau) \mathrm{d}\tau + q^{k-1}q+\sigma^{k-1}\sigma&=1\:, \end{array}\right. $$

defines the tangent vector \({\phi=(v,q,\sigma)}\) to the solution branch, so that (31) simply imposes \({q=0}\), i. e., the limit point. Together with (4), they compose a BVP with \({2n}\) ODEs, 2n boundary conditions, and 2 integral conditions, i. e., \({n_p=2n+2-2n+1=3}\), namely T and two free parameters. Similar dimensional considerations hold for the PD and NS cases below.

PD: For the period-doubling bifurcation, the extra equations augmenting (4) are

$$ \left\{\arraycolsep0.17em\begin{array}{rl} \dot{v}(\tau)-Tf_{x}(x(\tau),p)v(\tau) &=0\:,\\ v(1)+v(0)&=0\:,\\ \int_{0}^{1} v^{\top}(\tau)v^{k-1}(\tau) \mathrm{d}\tau &=1\:, \end{array}\right. $$
(32)

for the variables \({(x,p,T,v)}\). Here v is the eigenfunction of the linearized ODE associated with the multiplier \({\mu=-1}\). In fact, the second equation in (32) imposes \({v(1)=Mv(0)=-v(0)}\), where M is the monodromy matrix, while the third equation scales the eigenfunction against the previous continuation point.

NS: For the Neimark–Sacker bifurcation, the BVP (4) is augmented with the equations

$$ \left\{\arraycolsep0.17em\begin{array}{rl} \dot{v}(\tau)-Tf_{x}(x(\tau),p)v(\tau)&=0\:,\\ v(1)-\text{e}^{i\theta}v(0)&=0\:,\\ \int_{0}^{1} \bar{v}^{\top}(\tau)v^{k-1}(\tau) \mathrm{d}\tau &=1\:, \end{array}\right. $$
(33)

for the variables \({(x,p,T,v,\theta)}\) with \({v\in\mathcal{C}^1([0,\,1],\mathbb{C}^n)}\). Here v is the eigenfunction of the linearized ODE associated with the multiplier \({\mu=\text{e}^{i\theta}}\). Of course, the real formulation should be used in practice.

Minimally Extended Systems

For limit cycle continuation, the discretization of the fully extended BVP (4) with (31), (32) or (33) may lead to large APs to be solved. In [39] a minimally extended formulation is proposed, augmenting (4) with a function g having only a few components. The function g is defined using bordered systems.

LPC: For this bifurcation, one uses suitable bordering functions \({v_1,w_1}\) and vectors \({v_2,w_2,w_3}\) such that the following system linear in \({(v,\sigma,g)}\) is regular

$$ \left\{\arraycolsep0.17em\begin{array}{rl} \dot{v}(\tau)-Tf_{x}(x(\tau),p)v -f(x(\tau),p)\sigma +w_{1}g &=0\:,\\ v(1)-v(0)+w_2g&=0\:,\\ \int_{0}^{1}f(x(\tau),p)^{\top} v(\tau)\mathrm{d}\tau+w_{3}g &=0\:,\\ \int_{0}^{1}v_1^{\top} v(\tau)\mathrm{d}\tau+v_2\sigma &=1\:. \end{array}\right. $$
(34)

The function \( g=g(x,T,p) \) vanishes at a LPC point. The bordering functions \( v_1,w_1 \) and vectors \( v_2,w_2,w_3 \) can be updated to keep (34) nonsingular, in particular, \( v_1=v^{k-1}\) and \( v_2=\sigma^{k-1}\) from the previously computed point are used. It is convenient to introduce the Dirac operator \( \delta_{i}f=f(i) \) and the integral operator \( \mathrm{Int}_{v(\cdot)}f=\int_{0}^{1}v(\tau)^{\top} f(\tau)\mathrm{d}\tau \) and to rewrite (34) in operator form

$$ \left(\begin{matrix} D-Tf_{x}(x(\cdot),p) & -f(x(\cdot),p) & w_{1} \\ \delta_{0}-\delta_{1} & 0 & w_2 \\ {\mathrm{Int}}_{f(x(\cdot),p)} & 0 & w_{3}\\ {\mathrm{Int}}_{v_{1}(\cdot)} & v_{2} & 0 \end{matrix}\right) \left(\begin{matrix} v \\ \sigma \\ g \end{matrix}\right) = \left(\begin{matrix} 0 \\ 0 \\ 0 \\ 1 \end{matrix}\right)\:. $$
(35)

PD: The same notation as for the minimally extended LPC defining system is used and suitable bordering functions \({v_1,w_1}\) and vector w 2 are chosen such that the following system is regular

$$ \left(\begin{matrix} D-Tf_{x}(x(\cdot),p) & w_{1} \\ \delta_{0}+\delta_{1} & w_2 \\ {\mathrm{Int}}_{v_{1}(\cdot)} & 0 \end{matrix}\right) \left(\begin{matrix} v \\ g \end{matrix}\right) = \left(\begin{matrix} 0 \\ 0 \\ 1 \end{matrix}\right)\:. $$
(36)

At a PD bifurcation \({g(x,T,p)}\) defined by (36) vanishes.

NS: Let \({\hat{\kappa}=\cos(\theta)}\) denote the real part of the nonhyperbolic multiplier and choose bordering functions \({v_1,v_2,w_{11},w_{12}}\) and vectors \({w_{21},w_{22}}\) such that the following system is nonsingular and defines the four components of g

$$ \left(\begin{matrix} D-Tf_{x}(x(\cdot),p) & w_{11} & w_{12}\\ \delta_{2}-2\hat\kappa\delta_{1}+\delta_{0} & w_{21} & w_{22} \\ {\mathrm{Int}}_{v_{1}(\cdot)} & 0 & 0\\ {\mathrm{Int}}_{v_{2}(\cdot)} & 0 & 0 \end{matrix}\right) \left(\begin{matrix} v \\ g \end{matrix}\right) = \left(\begin{matrix} 0_{n\times 2} \\ I_{2} \end{matrix}\right)\:. $$
(37)

At a NS bifurcation the four components of \({g(x,T,p)}\) defined by (37) vanish, and, similar to the Hopf bifurcation, the BVP (4) can be augmented with any two components of g.

Test Functions for Codimension 2 Bifurcations

During the continuation of codim 1 branches, one generically meets codim 2 bifurcations. Some of these arise through extra instabilities in the linear terms, while other codim 2 bifurcations are defined through degeneracies in the normal form coefficients. For equilibria, codim 2 bifurcations of the first type are the Bogdanov–Takens (BT, two zero eigenvalues with only one associated eigenvector), the zero-Hopf (ZH, also called Gavrilov‐Guckenheimer, a simple zero eigenvalue and a simple imaginary pair), and the double Hopf (HH, two distinct imaginary pairs), while higher order degeneracies lead to cusp (CP, \({a_\texttt{LP}=0}\) in the normal form (11)) or generalized Hopf (GH, \({d_\texttt{H}=0}\) in the normal form (12), also called Bautin or degenerate Hopf). For cycles, there are strong resonances (R1–R4), fold-flip (LPPD), fold‐Neimark–Sacker (LPNS), flip‐Neimark–Sacker (PDNS), and double Neimark–Sacker (NSNS) among those involving linear terms, while higher order degeneracies lead to cusp (CP), degenerate flip (GPD), and Chenciner (CH) bifurcations. Naturally, the normal form coefficients, see [9,56], are a suitable choice for the corresponding test functions. In Tables 2 and 3, test functions are given which are defined along the corresponding codim 1 branches of equilibrium and limit cycle bifurcations, respectively. The functions refer to the corresponding defining system and to Table 1. Upon detecting and locating a zero of a test function it may be necessary to check that a bifurcation is really involved, similar to the Hopf case where neutral saddles are excluded. For details about the dynamics and the bifurcation diagrams at codim 2 points, see [2,44,56].

Table 2 Test functions along limit point and Hopf bifurcation curves. The matrix \({A_{c}}\) for the test function of the double Hopf bifurcation can be obtained as the orthogonal complement in \({\mathbb{R}^{n}}\) of the Jacobi matrix A w.r.t. the two‐dimensional eigenspace associated with the computed branch of Hopf bifurcations
Table 3 Test functions along LPC, PD and NS bifurcation curves. The matrix \({M_{c}}\) along the Neimark–Sacker bifurcation curve is defined similarly to \({A_{c}}\) in Table 2 along the Hopf bifurcation, as the orthogonal complement of the monodromy matrix M w.r.t. the two‐dimensional eigenspace associated with the computed branch of Neimark–Sacker bifurcations

Branch Switching

Figure 4

(a) Projection of two solution branches of (2), intersecting at \({y^{\texttt{BP}}}\), on the null-space of \({F_y(y^{\texttt{BP}})}\) (\({\mathcal{N}}\)), close to \({y^{\texttt{BP}}}\) (planar representation in coordinates \({(\alpha_{1},\alpha_{2})}\) with respect to a given basis). The two solution branches are approximated by the straight lines in \({\mathcal{N}}\) spanned by their tangent vectors at \({y^{\texttt{BP}}}\) (thick vectors). (b) and (c) Projection on \({\mathcal{N}}\), close to \({y^{\texttt{BP}}}\), of the two solution branches of the perturbed problem (see (43) for \({b> 0}\) and \({b<0}\), respectively)

This section considers points in the continuation space from which several solution branches of interest, with the same codimension, emanate. At these points suitable “branch switching” procedures are required to switch from one solution branch to another. First, the transversal intersection of two solution branches of the same continuation problem is considered, which occurs at so‐called Branch Points (BP) (also called “singular” or “transcritical” bifurcation points). Branch points are nongeneric, in the sense that arbitrarily small perturbations of F in (2) turn the intersection into two separated branches, which come close to the “ghost” of the (disappeared) intersection but then fold (LP) and leave as if they follow the other branch (see Fig. 4).

BPs, however, are very common in applications due to particular symmetries of the continuation problem, like reflections in state space, conserved quantities or the presence of trivial solutions. This is why BP detection and continuation have recently received attention [16,25,28]. Then, the switch from a codim 0 solution branch to that of a different continuation problem at codim 1 bifurcations is examined. In particular, the equilibrium-to-cycle switch at a Hopf bifurcation and the period-1-to‐period-2 cycle switch at a flip bifurcation are discussed. Finally, various switches between codim 1 solution branches of different continuation problems at codim 2 bifurcations are addressed.

Branch Switching at Simple Branch Points

Simple BPs are points \({y^{\texttt{BP}}=(u^{\texttt{BP}},p^{\texttt{BP}})}\), encountered along a solution branch of (2), at which the nullspace \({\mathcal{N}(F_y(y))}\) of \({F_y(y)}\) is two‐dimensional, i. e., the nullspace is spanned by two independent vectors \({\phi_1,\phi_2\in Y}\), with \({\phi_i^{\top} \phi_i=1}\), \({i=1,2}\). Generically, two solution branches of (2) pass through \({y^{\texttt{BP}}}\), with transversal tangent vectors given by suitable combinations of ϕ1 and ϕ2. In the following, only the case of the AP (2), i. e., \({Y={\mathbb R}^{n+1}}\) (\({u\in\mathbb{R}^n}\) and \({p\in\mathbb{R}}\)) and \({F\colon\mathbb{R}^{n+1}\to{\mathbb R}^{n}}\) is considered. Similar considerations hold for the BVP (3) (see [16] for details), though, loosely speaking, results for APs can be applied to BVPs after time discretization.

A BP is not a regular point, since \( {\text{rank}}(F_{y}(y^{\texttt{BP}}))=n-1 \). Distinguishing between state and parameters, there are two possibilities

$$ \left\{\arraycolsep0.17em\begin{array}{ll} (\text{i}) & {\text{dim}}\mathcal{N}(F_u(y^{\texttt{BP}}))=1,F_{p}(y^{\texttt{BP}})\in\mathcal{R}(F_{u}(y^{\texttt{BP}}))\\ & \Longrightarrow \phi_{1}=(v_{1},0),\phi_{2}=(v_{2},q_{2}),\\ (\text{ii}) & {\text{dim}}\mathcal{N}(F_u(y^{\texttt{BP}}))=2,F_{p}(y^{\texttt{BP}})\notin\mathcal{R}(F_{u}(y^{\texttt{BP}}))\\ & \Longrightarrow \phi_{1}=(v_{1},0),\phi_{2}=(v_{2},0)\:, \end{array}\right. $$
(38)

for suitably chosen \({v_{1},v_{2},q_{2}}\). In particular, in the first case, \({v_1}\) spans the nullspace of \( F_u(y^{\texttt{BP}}) \) and ϕ2 is determined by solving \( \left(F_u(y^{\texttt{BP}})v_2+F_p(y^{\texttt{BP}})q_2,v_{1}^{\top} v_{2}, v_{2}^{\top} v_{2}+q_{2}^{2}\right)=(0,0,1) \).

BPs can be detected by means of the following test function

$$ \varphi_{\texttt{BP}}=\det\left(\left[\begin{matrix} F_y(y)\\ \phi^{\top} \end{matrix}\right]\right)\:, $$
(39)

where ϕ is the tangent vector to the solution branch during continuation. The test function (39) indeed vanishes when (2) admits a second independent tangent vector. Note from (38) that test function (21) also vanishes at BPs, so that test function (22) is more appropriate for LPs.

The vectors tangent to the two solution branches intersecting at \({y^{\texttt{BP}}}\) can be computed as follows. Parametrize one of the two solution branches by a scalar coordinate s, e. g., the arclength, so that y(s) and \({y_s(s)}\) denote the branch and its tangent vector locally around \({y(0)=y^{\texttt{BP}}}\). Then, \({F(y(s))}\) is identically equal to zero, so differentiating twice w.r.t. s one obtains \( F_{\mathrm{yy}}(y(s))[y_s(s),y_s(s)]+F_y(y(s))y_{\mathrm{ss}}(s)=0 \), which at \({y^{\texttt{BP}}}\) reads

$$ F_{\mathrm{yy}}(y^{\texttt{BP}})[y_s(0),y_s(0)]+F_y(y^{\texttt{BP}})y_{\mathrm{ss}}(0)=0\:, $$
(40)

with \({y_s(0)=\alpha_1\phi_1+\alpha_2\phi_2}\). Let \({\psi\in\mathbb{R}^n}\) span the nullspace of \({F_y(y^{\texttt{BP}})^{\top}}\) with \({\psi^{\top}\psi=1}\). Since the range of \({F_y(y^{\texttt{BP}})}\) is orthogonal to the nullspace of \({F_y(y^{\texttt{BP}})^{\top}}\), one can eliminate \({y_{\mathrm{ss}}(0)}\) in (40) by left‐multiplying both sides by \({\psi^{\top}}\), thus obtaining

$$ \psi^{\top} F_{\mathrm{yy}}(y^{\texttt{BP}})[\alpha_1\phi_1+\alpha_2\phi_2,\alpha_1\phi_1+\alpha_2\phi_2]=0\:. $$
(41)

Equation (41) is called the algebraic branching equation  [50] and is often written as

$$ c_{11}\alpha_1^2+2c_{12}\alpha_1\alpha_2+c_{22}\alpha_2^2=0\:, $$
(42)

with \({c_{\mathrm{ij}}=\psi^{\top} F_{\mathrm{yy}}(y^{\texttt{BP}})[\phi_i,\phi_j],i,j=1,2}\). At BP detection, the discriminant \({c_{11}c_{22}-c_{12}^2}\) is generically negative (otherwise the BP would be an isolated solution point of (2)), so that two distinct pairs \({(\alpha_1,\alpha_2)}\) and \({(\tilde\alpha_1,\tilde\alpha_2)}\), uniquely defined up to scaling, solve (42) and give the directions of the two emanating branches.

Once the two directions are known, one can easily perform branch switching by an initial prediction from \({y^{\texttt{BP}}}\) along the desired direction. This, however, requires the second‐order derivatives of F w.r.t. all continuation variables. Though good approximations can often be achieved by finite differences, an alternative and computationally cheap prediction can be taken in the nullspace of \({F_y(y^{\texttt{BP}})}\) along the direction orthogonal to \({y_s(0)}\). The vector \({y_s(s)}\) is in fact known at each point during the continuation of the solution branch up to BP detection, so that the cheap prediction for the other branch spans the (one‐dimensional) nullspace of

$$ \left[\begin{matrix} F_y(y^\texttt{BP})\\ y_s(0)^{\top}\end{matrix}\right]\:. $$
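In a finite‐dimensional (discretized) setting this cheap prediction requires only the nullspace of one square matrix. A minimal Python sketch, assuming Fy holds the Jacobian \({F_y(y^{\texttt{BP}})}\) and ys the tangent vector \({y_s(0)}\) (both names are hypothetical):

```python
import numpy as np

def cheap_switch_direction(Fy, ys):
    """Unit vector spanning the nullspace of [F_y(y_BP); y_s(0)^T].

    Fy : (n, n+1) Jacobian of the continuation problem at the branch point,
    ys : (n+1,) tangent vector of the branch followed up to BP detection.
    The returned vector lies (numerically) in the nullspace of Fy and is
    orthogonal to ys, and serves as a prediction direction for the other branch.
    """
    stacked = np.vstack([Fy, ys])
    # the desired direction is the right singular vector associated with the
    # (near-)zero singular value of the stacked (n+1) x (n+1) matrix
    _, _, Vt = np.linalg.svd(stacked)
    return Vt[-1]
```

The prediction for the other branch is then \({y^{\texttt{BP}}+h\,d}\), with d the returned direction and h a small stepsize, followed by the usual Newton corrections.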

Branch Point Continuation

Generic Problems

Several defining systems have been proposed for BP continuation, see [16,25,28,63,64,65]. Among fully extended formulations, the most compact one characterizes BPs as points at which \({F_y(y)}\) has rank defect 1, i. e., the nullspace of \({F_y(y)^{\top}}\) is one‐dimensional. BP continuation is therefore defined by

$$ \left\{\arraycolsep0.17em\begin{array}{rl} F(u,p) & = 0\:,\\ F_u(u,p)^{\top}\psi & = 0\:,\\ F_p(u,p)^{\top}\psi & = 0\:,\\ \psi^{\top}\psi-1 & = 0\:. \end{array}\right. $$

Counting equations, \({2n+2}\), and variables \({u,\psi\in\mathbb{R}^n}\), \({p\in\mathbb{R}}\), i. e., \({2n+1}\) scalar variables, it follows that two extra parameters generically need to be freed. In other words, BPs are codim 2 bifurcations, which are not expected along generic solution branches of (2).

Non-generic Problems

BP continuation can be performed in a single extra free parameter for nongeneric problems characterized by symmetries that persist for all parameter values. In such cases, the continuation problem (2) is perturbed into

$$ F(y)+bu_b=0\:, $$
(43)

where \({b\in\mathbb{R}}\) and \({u_b\in\mathbb{R}^n}\) are new variables of the defining system. The idea is that \({u_b}\) “breaks the symmetry”, in the sense that problem (43) has no BP for small \({b\neq 0}\), and BP continuation can be performed in two extra free parameters, one of which, b, remains zero during the continuation. The choice of \({u_b}\) is not trivial. Geometrically, \({u_b}\) must be such that small values of b perturb Fig. 4a into Fig. 4b, say for \({b> 0}\), and into Fig. 4c for \({b<0}\). It turns out (see, e. g., [16]) that \({u_b = \psi}\) is a good choice, i. e., perturbations not in the range of \({F_y(y^{\texttt{BP}})}\) break the symmetry, since close to the BP, they must be balanced by the nonlinear terms of the expansion of F in (43), and this implies significant deviations of the perturbed solution branch y from the unperturbed y(s).

BP continuation for nongeneric problems is therefore defined by

$$ \left\{\arraycolsep0.17em\begin{array}{rl} F(u,p)+b\psi &= 0\:,\\ F_u(u,p)^{\top}\psi & = 0\:,\\ F_p(u,p)^{\top}\psi & = 0\:,\\ \psi^{\top}\psi-1 & = 0\:. \end{array}\right. $$
(44)

This defining system is also useful for accurately computing BPs. In fact, the basin of convergence of the Newton iterations in (8) or (9) shrinks at BPs (recall \({F_y(y)}\) does not have full rank at BPs), while system (44), in the \({2n+2}\) variables \({(u,p,b,\psi)}\), has a unique solution \({(u^{\texttt{BP}},p^{\texttt{BP}},0,\psi)}\) close to the BP. Thus, when the BP test function (39) changes sign along a solution branch of (2), Newton corrections can be applied to (44), starting from the best possible prediction, i. e., with \({b=0}\) and ψ as the eigenvector of \({F_u(u,p)^{\top}}\) associated with the real eigenvalue closest to zero.
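As an illustration, a minimal Python sketch of the residual of (44), as it could be handed to a Newton solver, is given below; the callable F and the forward-difference approximations of \({F_u}\) and \({F_p}\) are assumptions made here to keep the sketch self-contained.

```python
import numpy as np

def bp_residual(z, F, n, eps=1e-6):
    """Residual of the perturbed BP defining system (44).

    z = (u, p, b, psi) stacked into one vector of length 2n + 2;
    F(u, p) returns the n-dimensional residual of the continuation problem.
    The Jacobians F_u and F_p are approximated by forward differences.
    """
    u, p, b, psi = z[:n], z[n], z[n + 1], z[n + 2:]
    F0 = F(u, p)
    Fu = np.zeros((n, n))
    for j in range(n):
        du = np.zeros(n)
        du[j] = eps
        Fu[:, j] = (F(u + du, p) - F0) / eps
    Fp = (F(u, p + eps) - F0) / eps
    return np.concatenate([
        F0 + b * psi,           # F(u, p) + b psi = 0
        Fu.T @ psi,             # F_u(u, p)^T psi = 0
        [Fp @ psi],             # F_p(u, p)^T psi = 0
        [psi @ psi - 1.0],      # psi^T psi - 1 = 0
    ])
```

Starting from \({b=0}\) and ψ as described above, the Newton corrections converge to the isolated solution \({(u^{\texttt{BP}},p^{\texttt{BP}},0,\psi)}\) when the predicted point is sufficiently close.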

Minimally Extended Formulation

A minimally extended defining system for BP continuation requires two scalar conditions, \({g_1(u,p)=0}\), \({g_2(u,p)=0}\), to be added to the unperturbed or perturbed problem (2) or (43) for generic and nongeneric problems, respectively. These functions \({g_1}\) and \({g_2}\) are defined in [28] by solving

$$ \left\{\arraycolsep0.20em\begin{array}{rl} F_y(y)\phi_1 +g_{1}\psi^{k-1}&=0\:,\\ F_y(y)\phi_2 +g_{2}\psi^{k-1}&=0\:,\\ (\phi_1^{k-1})^{\top}\phi_1-1&=0\:,\\ (\phi_2^{k-1})^{\top}\phi_2-1&=0\:,\\ (\phi_1^{k-1})^{\top}\phi_2 &=0\:,\\ (\phi_2^{k-1})^{\top}\phi_1 &=0\:, \end{array}\right. $$

in the unknowns \({\phi_{1},\phi_{2},g_1,g_2}\), while ψ is updated by solving

$$ \left\{\arraycolsep0.17em\begin{array}{rl} F_y(y)^{\top}\psi +g_1\phi_1+g_2\phi_2&=0\:,\\ \psi^{\top}\psi-1&=0\:, \end{array}\right. $$

in the unknowns \({\psi, g_1, g_2}\), after each Newton convergence.

Branch Switching at Hopf Points

At a Hopf bifurcation point \({y^{\texttt{H}}=(x^{\texttt{H}},p^{\texttt{H}})}\), one typically wants to start the continuation of the emanating branch of limit cycles. For this, one might think of using the branch switching procedure described above to switch from a constant to a periodic solution branch of the limit cycle BVP (4). Unfortunately, \({y^{\texttt{H}}}\) is not a simple BP for problem (4), since the period T is undetermined along the constant solution branch, so that, formally, an infinite number of branches emanate from \({y^{\texttt{H}}}\). Thus, a prediction in the proper direction, i. e., along the vector \({\phi=(v,q)}\) tangent to the periodic solution branch, is required.

Let y(s) represent the periodic solution branch, with \({y(0)=y^{\texttt{H}}}\). Then, x and v are period-1 vector‐valued functions in \({\mathcal{C}^1([0,\,1],\mathbb{R}^n)}\), \({p\in\mathbb{R}}\) and T are the free parameters, and \({q=(p_s,T_s)}\). The Hopf bifurcation theorem [56] ensures that \({p_s=T_s=0}\) and that v is the unit‐length solution of the linearized, time‐independent equation \( \dot{v}=T(0)f_x(x^{\texttt{H}},p^{\texttt{H}})v \), i. e., \( v(\tau)=\sin(2\pi \tau)w_r+\cos(2\pi \tau)w_i \), where \({w=w_r+iw_i}\) (\( w_r^{\top} w_r+w_i^{\top} w_i=1,w_r^{\top} w_i=0 \)) is the complex eigenvector of \( f_x(x^{\texttt{H}},p^{\texttt{H}}) \) associated to the eigenvalue \({i\omega}\), \({\omega=2\pi/T(0)}\).

The periodic solution branch of the limit cycle BVP (4) can therefore be followed, provided the phase condition (see the last equation in (4)) is replaced by \({\int_0^1x^{\top}\dot{v}\mathrm{d}\tau=0}\) at the first Newton correction. Otherwise, x would be undetermined among time‐shifted solutions.
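A minimal Python sketch of this initial prediction (ε is a small user‐chosen amplitude; the names are hypothetical, and the rotation of the complex eigenvector that enforces \({w_r^{\top} w_i=0}\) is omitted for brevity):

```python
import numpy as np

def hopf_initial_cycle(xH, fxH, eps=1e-2, ntst=50):
    """Initial prediction of the cycle branch emanating from a Hopf point.

    xH  : equilibrium at the Hopf point,
    fxH : Jacobi matrix f_x(xH, pH) at the Hopf point.
    Returns a mesh tau in [0, 1], the predicted cycle on that mesh and the
    predicted period 2*pi/omega; the parameter prediction stays at pH.
    """
    lam, V = np.linalg.eig(fxH)
    k = np.argmin(np.abs(lam.real))     # eigenvalue (pair) closest to the axis
    omega = abs(lam[k].imag)
    w = V[:, k]
    wr, wi = w.real, w.imag
    scale = np.sqrt(wr @ wr + wi @ wi)  # enforce wr.wr + wi.wi = 1
    wr, wi = wr / scale, wi / scale
    tau = np.linspace(0.0, 1.0, ntst + 1)
    cycle = xH[None, :] + eps * (np.sin(2 * np.pi * tau)[:, None] * wr
                                 + np.cos(2 * np.pi * tau)[:, None] * wi)
    return tau, cycle, 2 * np.pi / omega
```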

Branch Switching at Flip Points

At a flip bifurcation point \({y^{\texttt{PD}}=(x^{\texttt{PD}},p^{\texttt{PD}})}\), where \( x^{\texttt{PD}}\in\mathcal{C}^1([0,\,1],\mathbb{R}^n) \), \({x^{\texttt{PD}}(1)=x^{\texttt{PD}}(0)}\), one typically wants to start the continuation of the emanating branch of “period-2” limit cycles, i. e., those which close to \({y^{\texttt{PD}}}\) have approximately double the period of the bifurcating cycle. For this, branch switching at simple BPs can be used. In fact, two solution branches of the limit cycle BVP (4) transversely intersect at \({y^{\texttt{PD}}}\) if one considers T as the doubled period: the branch of interest and the branch along which the corresponding period-1 cycle is traced twice. In other words, one can see the period‐doubling bifurcation in the period-1 branch as the “period‐halving” bifurcation in the period-2 branch.

Alternatively, the vector \({\phi=(v,q)}\) tangent to the period-2 branch at \({y^{\texttt{PD}}}\) is given by the flip theorem [56] and does not need to be computed by solving the algebraic branching Eq. (41). In particular, the initial solution of the period-2 BVP is \({x(t)=x^{\texttt{PD}}(2t)}\), \({p=p^{\texttt{PD}}}\), \({T=2T^{\texttt{PD}}}\), while \({q=(p_s,T_s)=0}\) and

$$ v(t)=\begin{cases} w(t)\:, & 0 \le t < 1\:,\\ -w(t-1)\:, & 1 \le t < 2\:, \end{cases}$$

where w(t) is the unit‐length eigenfunction of the linearized (time‐dependent) ODE associated with the multiplier − 1, i. e.,

$$ \left\{\arraycolsep0.17em\begin{array}{rl} \dot{w}-T^{\texttt{PD}}f_x(x^{\texttt{PD}},p^{\texttt{PD}})w & = 0\:,\\ w(1)+w(0) & = 0\:,\\ \int_0^1w(\tau)^{\top} w(\tau)\,\mathrm{d}\tau &= 1\:. \end{array}\right. $$
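A minimal Python sketch of this construction, assuming the period-1 cycle and the eigenfunction w are available on a common mesh (the names are hypothetical; the predicted period is \({2T^{\texttt{PD}}}\) and the parameter prediction stays at \({p^{\texttt{PD}}}\)):

```python
import numpy as np

def flip_initial_cycle(tau, xPD, w, eps=1e-2):
    """Initial prediction of the period-2 branch at a flip point.

    tau : mesh in [0, 1] for the period-1 cycle,
    xPD : cycle values on the mesh, shape (len(tau), n),
    w   : eigenfunction for the multiplier -1 on the same mesh, same shape.
    Returns the rescaled mesh on [0, 1] and the predicted period-2 cycle.
    """
    # trace the period-1 cycle twice and perturb along v from the piecewise
    # formula above: +w on the first loop, -w on the second loop
    x2 = np.vstack([xPD, xPD])
    v = np.vstack([w, -w])
    tau2 = np.concatenate([0.5 * tau, 0.5 + 0.5 * tau])
    return tau2, x2 + eps * v
```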

Branch Switching at Codimension 2 Equilibria

Assume that \({y_2=(x_2,p_2)}\), \({x_2\in\mathbb{R}^n}\), \({p_2\in\mathbb{R}^2}\), identifies a codim 2 equilibrium bifurcation point of system (1), either cusp (CP), generalized Hopf (GH), Bogdanov–Takens (BT), zero-Hopf (ZH), or double Hopf (HH). Then, according to the analysis of the corresponding normal form (13) with two parameters and critical dimension \({n_c=1,2,3,4}\) at CP, BT and GH, ZH, HH points, respectively, several curves of codim 1 bifurcations of equilibria and limit cycles emanate from \({p_2}\) in the parameter plane [56]. Here, the problem of how the continuation of such curves can be started from \({y_2}\) is discussed, restricting attention to equilibrium bifurcations (an example of a cycle bifurcation is given at the end; see also [9,61]).

In general, the normal form analysis also provides an approximation of each emanating codim 1 bifurcation, in the form of a parameterized expansion

$$ w=\sum_{\mu\geq 1}\frac{1}{\mu!}w_{\mu}\varepsilon^{\mu}\:,\quad \alpha=\sum_{\nu\geq 1}\frac{1}{\nu!}\alpha_\nu\varepsilon^\nu\:, $$
(45)

for small \({\varepsilon> 0}\) and up to some finite order. Then, the (parameter‐dependent) center manifold (15) maps such an approximation back to the solution branch of the original system (1), locally to \({y_2=(H(0,0),V(0))}\), and allows one to compute the proper prediction from \({y_2}\) along the direction of the desired codim 1 bifurcation. However, as will be concluded in the following, such approximations are not needed to start equilibrium bifurcations and are therefore not derived (see [9] for all available details).

Switching at a Cusp Point

At a generic cusp point \({y^{\texttt{CP}}=(u^{\texttt{CP}},p^{\texttt{CP}})}\), two fold bifurcation curves terminate tangentially. Generically, the cusp point is a regular point for the fold defining systems (24) and (25), where the tangent vector has zero p‑components. The cusp geometrically appears once the fold solution branch is projected onto the parameter plane. Thus, the continuation of the two fold bifurcations can simply be started as forward and backward fold continuation from \({y^{\texttt{CP}}}\). Since the continuation direction is uniquely defined, neither a tangent prediction nor a nonlinear expansion of the desired solution branch is necessary.

Switching at a Generalized Hopf Point

At a Hopf point with vanishing Lyapunov coefficient \({y^{\texttt{GH}}=(x^{\texttt{GH}},p^{\texttt{GH}})}\), an LPC bifurcation terminates tangentially to a Hopf bifurcation, which turns from super- to subcritical, and vice versa. Thus, \({y^{\texttt{GH}}}\) is a regular point for the Hopf defining systems (27)–(30).

Switching at a Bogdanov–Takens Point

At a generic Bogdanov–Takens point \({y^{\texttt{BT}}=(u^{\texttt{BT}},p^{\texttt{BT}})}\), a Hopf and a (saddle) homoclinic bifurcation terminate tangentially to a fold bifurcation, along which a second real eigenvalue of the Jacobi matrix \({f_x(x,p)}\) crosses zero at \({y^{\texttt{BT}}}\). Generically, \({y^{\texttt{BT}}}\) is a regular point for the fold defining systems (24) and (25) and for the Hopf defining systems (27), (29), and (30). However, \({y^{\texttt{BT}}}\) is a simple BP for the Hopf defining system (28). In fact, the fold and Hopf branches are both solution branches of the continuation problem (28), where \({\omega=0}\) and \({v_1=v_2}\), with \({v_1^{\top} v_1=v_2^{\top} v_2=1/2}\), along the fold branch. The branch switching procedures described in this section readily apply in this case.

Switching at Zero-Hopf and Double Hopf Points

Generically, zero-Hopf and double Hopf points are regular points for codim 1 equilibria bifurcations (fold and Hopf), so that the proper initial prediction is uniquely defined. For limit cycle bifurcations and connecting orbits, nonlinear expansions of the fold and Hopf branches are needed to derive initial predictions for the emanating branches. For cycles the switching procedure can be set up using the center manifold [61]. However, initial predictions for homoclinic and heteroclinic bifurcations for both zero-Hopf and double Hopf cases are not available in general, but see [35].

The double Hopf bifurcation appears when two different branches of Hopf bifurcations intersect. Several bifurcation curves are rooted at the double Hopf point. In particular, it is known that there are generically two branches, two half‐lines in the parameter plane, emanating from this point along which a Neimark–Sacker bifurcation of limit cycles occurs [56]. Here it is discussed how to initialize the continuation of a Neimark–Sacker branch (using (37)) from a double Hopf point after continuation of a Hopf branch. The initialization requires approximations of the cycle x, the period T, the parameters p and the real part of the multiplier \({\hat{\kappa}}\). These can be obtained by reducing the dynamics of (1) to the center manifold. On the center manifold, the dynamics near a HH bifurcation point is governed by the following normal form

$$ \begin{aligned} \begin{pmatrix}\dot{w}_{1} \\ \dot{w}_{2} \end{pmatrix} &= \begin{pmatrix} (i\omega_{1}(\alpha)+\alpha_{1})w_{1} + g_{2100}w_{1}|w_{1}|^{2} + g_{1011}w_{1}|w_{2}|^{2}\\ (i\omega_{2}(\alpha)+\alpha_{2})w_{2} + g_{1110}w_{2}|w_{1}|^{2} + g_{0021}w_{2}|w_{2}|^{2} \end{pmatrix} \\ &\quad +O(\|(w_{1},w_{2})\|^{4})\:, \end{aligned} $$
(46)

where \( (w_1,w_2)\in\mathbb{C}^{2}\). In polar coordinates, \( w_{1} = \rho_{1}{\mathrm{e}}^{i\theta_{1}}, w_{2} = \rho_{2}{\mathrm{e}}^{i\theta_{2}}\), the asymptotics from the normal form as in (45) for the nonhyperbolic cycle on one branch are given by

$$ (\rho_{1},\rho_{2},\alpha_{1},\alpha_{2}) = \left(\varepsilon,0,-\Re(g_{2100})\varepsilon^2, -\Re(g_{1110})\varepsilon^2\right)\:, $$
(47)

with \({\theta_{1}\in[0,2\pi],\theta_2=0}\).

Although the system may be high‐dimensional, the computation of the coefficients and center manifold vectors is relatively straightforward. Introduce \({Av_{j}=i\omega_{j}v_{j}, A^{\top} w_{j}=-i\omega_{j}w_{j}}\) with \({\bar{v}_{j}^{\top} v_{j}=\bar{w}_{j}^{\top} v_{j}=1}\), let the multi-index ν range over \({(1,0)}\) and \({(0,1)}\), and introduce the standard basis vectors \({e_{10}=(1,0),e_{01}=(0,1)}\). Using the expansion (14), the cubic critical normal form coefficients and the parameter dependence are calculated from

$$ \begin{aligned} g_{2100} &= \bar{w}_{1}^{\top}[C(v_{1},v_{1},\bar{v}_{1})+B(h_{2000},\bar{v}_{1}) \\ &\qquad + 2B(h_{1100},v_{1})]\big/2\:,\\ g_{1011} &= \bar{w}_{1}^{\top}[C(v_{1},v_{2},\bar{v}_{2})+B(h_{1010},\bar{v}_{2}) \\ &\qquad+ B(h_{1001},v_{2})+B(h_{0011},v_{1})], \\ g_{1110} &= \bar{w}_{2}^{\top}[C(v_{2},v_{1},\bar{v}_{1})+B(h_{1100},v_{2})+B(h_{1010},\bar{v}_{1}) \\ &\qquad+ B(\bar{h}_{1001},v_{1})]\:,\\ g_{0021} &= \bar{w}_{2}^{\top}[C(v_{2},v_{2},\bar{v}_{2})+B(h_{0020},\bar{v}_{2})\\ &\qquad +2B(h_{0011},v_{2})]/2\:,\\ \Gamma_{j,\nu} &= \bar{p}_{j}^{\top}[A_{1} (v_{j},e_{\nu}) - B(v_{j},A^{-1}J_{1}e_{\nu})]\big/2\:, \end{aligned}$$
(48)

where \({j=1,2}\) and

$$ \begin{aligned} h_{2000} & = (2i\omega_{1}I_{n}-A)^{-1}B(v_{1},v_{1})\:,\\ h_{0020} & = (2i\omega_{2}I_{n}-A)^{-1}B(v_{2},v_{2})\:, \\ h_{1100} & = -A^{-1}B(v_{1},\bar{v}_{1})\:, \\ h_{0011} & = -A^{-1}B(v_{2},\bar{v}_{2})\:, \\ h_{1010} & = (i(\omega_{1}+\omega_{2})I_{n}-A)^{-1}B(v_{1},v_{2})\:, \\ h_{1001} & = (i(\omega_{1}-\omega_{2})I_{n}-A)^{-1}B(v_{1},\bar{v}_{2})\:, \end{aligned}$$
(49)

where \({h_\mu}\) are the vectors in the expansion of the center manifold (17).
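The linear solves in (49) translate directly into code; a minimal Python sketch, assuming the Jacobi matrix A at the HH point and a user‐supplied function B implementing the bilinear form \({B(\cdot,\cdot)}\) are available (both names are hypothetical):

```python
import numpy as np

def center_manifold_vectors(A, B, v1, v2, omega1, omega2):
    """Second-order center manifold vectors h_mu from (49)."""
    n = A.shape[0]
    I = np.eye(n)
    solve = np.linalg.solve
    h2000 = solve(2j * omega1 * I - A, B(v1, v1))
    h0020 = solve(2j * omega2 * I - A, B(v2, v2))
    h1100 = -solve(A, B(v1, np.conj(v1)))
    h0011 = -solve(A, B(v2, np.conj(v2)))
    h1010 = solve(1j * (omega1 + omega2) * I - A, B(v1, v2))
    h1001 = solve(1j * (omega1 - omega2) * I - A, B(v1, np.conj(v2)))
    return h2000, h0020, h1100, h0011, h1010, h1001
```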

Now, to construct a cycle for (46), a mesh for θ1 is defined and the asymptotics (47) are inserted into the polar coordinates. This cycle is mapped back to the original system by using (17) with (47), (48) and (49). The transformation between the free system parameters and the unfolding parameters is given by \({p(\varepsilon)=\Re(\Gamma_{1}\;\Gamma_{2})^{-1}(\alpha_{1},\alpha_{2})^{\top}}\) using (47). Finally, approximating formulas for the period and the real part of the multiplier are given by

$$ \begin{aligned} T&=\frac{2\pi}{\omega_{1}+\mathrm{d}\omega_{1}\varepsilon^{2}}\:,\quad \hat{\kappa}=\cos(T(\omega_{2}+\mathrm{d}\omega_{2}\varepsilon^{2}))\:,\\ (\mathrm{d}\omega_{1},\mathrm{d}\omega_{2}) &= -\Im(\Gamma_{1} \Gamma_{2})^{\top} (\Re(\Gamma_{1} \Gamma_{2})^{\top})^{-1}\Re(g_{2100},g_{1110})^{\top} \\ &\qquad + \Im(g_{2100},g_{1110})\:, \end{aligned}$$

where \({\mathrm{d}\omega_1,\mathrm{d}\omega_2}\) indicate the change in rotation in the angles \({\theta_1,\theta_2}\) for \({\varepsilon\neq 0}\). This construction is done up to second order in ε and leads to an initial approximation for the continuation of a Neimark–Sacker bifurcation curve starting from a double Hopf point. A similar setup can be defined for the other branch.

Connecting Orbits

Connecting orbits, such as homoclinic and heteroclinic orbits, can be continued using a variety of techniques. To fix some terminology: a heteroclinic orbit that connects two equilibrium points \({x_-}\) and \({x_+}\) in the ODE system (1) is an orbit for which

$$ \lim_{t\to -\infty} x(t) =x_- \quad \text{and} \quad \lim_{t\to\infty}x(t)=x_+\:. $$

A homoclinic orbit is an orbit connecting an equilibrium point to itself, that is, the case \({x_+=x_-}\). Similarly there exist heteroclinic connecting orbits between equilibrium points and periodic orbits, and homoclinic and heteroclinic orbits connecting periodic orbits to periodic orbits.

Traditionally, homoclinic orbits to equilibrium points were computed indirectly using numerical shooting or by continuing a periodic solution with a large enough but fixed period, that is close enough to a homoclinic orbit [24]. More modern and robust techniques compute connecting orbits directly using projection boundary conditions. Computing connections with and between periodic orbits is subject to current research [26,27,54]. For a more detailed description of the methods described here, see [9].

Formulation as a BVP

A heteroclinic orbit can be expressed as a BVP in the following way:

$$ \begin{aligned} \dot{x}(t)&=f(x(t),p)\:,\quad \lim_{t\to -\infty} x(t)=x_-\:,\quad \\\lim_{t\to \infty}x(t)&=x_+\:,\quad \int_{-\infty}^{\infty}(x(t)-x_{0}(t))^{\top}\dot{x}_{0}(t)\mathrm{d}t=0\:, \end{aligned}$$

where the integral condition is with respect to a reference solution \({x_{0}(t)}\) and fixes the phase, similarly to the phase condition for periodic orbits. This BVP, however, operates on an infinite interval, while, numerically, one can only operate on a finite, truncated interval \({[-T_-,T_+]}\). In this case the problem can be reformulated as

$$ \left\{\arraycolsep0.17em\begin{array}{rl} \dot{x}(t)-f(x(t),p)&=0\:,\\ L_s(p)(x(-T_-)-x_-(p))&=0\:,\\ L_u(p)(x(T_+)-x_+(p))&=0\:,\\ \int_{-T_-}^{T_+}(x(t)-x_{0}(t))^{\top}\dot{x_{0}}(t)\mathrm{d}t&=0\:, \end{array}\right. $$

where the equations involving \({L_s(p)}\) and \({L_u(p)}\) form the projection boundary conditions. Here \({L_s(p)}\) is an \({n_s\times n}\) matrix where the rows span the n s -dimensional stable eigenspace of \({A^{\top}(x_-)}\), and similarly \({L_u(p)}\) is an \({n_u\times n}\) matrix where the rows span the n u -dimensional unstable eigenspace of \({A^{\top}(x_+)}\), where A(x) denotes the Jacobi matrix of (1) at x. The projection boundary conditions then ensure that the starting point \({x(-T_-)}\) lies in the unstable eigenspace of \({x_-}\) and that the end point \({x(T_+)}\) lies in the stable eigenspace of \({x_+}\) (see Fig. 5).
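A minimal Python sketch of these projection boundary conditions, using an ordered real Schur decomposition to obtain orthonormal bases of the required eigenspaces (the names are hypothetical, and the rescaling that keeps the bases continuous in p, mentioned further below, is not included):

```python
import numpy as np
from scipy.linalg import schur

def stable_basis(M):
    """Rows form an orthonormal basis of the stable invariant subspace of M."""
    # order the real Schur form so that eigenvalues in the left half plane come
    # first; the leading sdim Schur vectors then span the stable subspace
    T, Z, sdim = schur(M, output='real', sort='lhp')
    return Z[:, :sdim].T

def projection_bc(x_start, x_end, x_minus, x_plus, A_minus, A_plus):
    """Residual of the projection boundary conditions of the truncated BVP.

    A_minus, A_plus : Jacobi matrices at the equilibria x_minus and x_plus.
    """
    Ls = stable_basis(A_minus.T)    # rows span the stable eigenspace of A^T(x_-)
    Lu = stable_basis(-A_plus.T)    # unstable eigenspace of A^T(x_+)
    return np.concatenate([Ls @ (x_start - x_minus),
                           Lu @ (x_end - x_plus)])
```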

Figure 5

Projection boundary conditions in two dimensions: the orbit homoclinic to \({x_0}\) is approximated by the truncated orbit from \({x(-T_-)}\) on the unstable eigenspace \({T^\text{u}}\) to \({x(T_+)}\) on the stable eigenspace \({T^\text{s}}\). The unstable and stable manifolds are denoted by \({W^\text{u}}\) and \({W^\text{s}}\), respectively: this figure shows that the homoclinic orbit is also approximated in parameter space, because the two manifolds do not coincide

In parallel, one must also continue the equilibrium points \({x_-(p)}\) and \({x_+(p)}\), unless they are fixed. The eigenspaces can then be determined by a Schur decomposition of the corresponding Jacobi matrices. These must subsequently be scaled to ensure continuity in the parameter p [6]. Alternatively, one can construct smooth projectors using an algorithm for the continuation of invariant subspaces, which includes the Riccati equation in the defining system; see [33] for this more recent method. All these conditions taken together give a BVP with two free parameters, since, in general, a homoclinic or heteroclinic connection is a codim 1 phenomenon.

Detecting Homoclinic Bifurcations

It is then possible to detect codim 2 bifurcations of homoclinic orbits by setting up test functions and monitoring them for zero crossings. For the continuation of these codim 2 bifurcations in three parameters such a test function can then be kept equal to zero, which provides an extra boundary condition. Simple test functions involve the values of the leading eigenvalues of the equilibrium point, i. e., the stable and unstable eigenvalues closest to the imaginary axis. Some other bifurcations, such as the inclination flip, involve solving the so‐called adjoint variational equation, which can detect whether the flow around the orbit is orientable or twisted like a Möbius strip. Homoclinics to a saddle‐node, that is, where one of the eigenvalues of the equilibrium is zero, can be detected and followed similarly, by constructing appropriate test functions. For details, see [9] and the references mentioned therein.
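For instance, a minimal Python sketch of a test function based on the leading eigenvalues (here the sum of their real parts, i. e., the saddle quantity used in the next subsection; the names are hypothetical and the equilibrium is assumed to have both stable and unstable eigenvalues):

```python
import numpy as np

def leading_eigenvalues(J):
    """Leading stable and unstable eigenvalues of the Jacobi matrix J."""
    lam = np.linalg.eigvals(J)
    stable = lam[lam.real < 0]
    unstable = lam[lam.real > 0]
    mu_s = stable[np.argmax(stable.real)]      # stable eigenvalue closest to the axis
    mu_u = unstable[np.argmin(unstable.real)]  # unstable eigenvalue closest to the axis
    return mu_s, mu_u

def saddle_quantity(J):
    """Sum of the real parts of the leading eigenvalues.

    Monitored along a homoclinic branch, a zero crossing flags a resonance
    of the leading eigenvalues (a codim 2 homoclinic bifurcation).
    """
    mu_s, mu_u = leading_eigenvalues(J)
    return mu_s.real + mu_u.real
```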

Homoclinic Branch Switching

It is sometimes desirable to do branch switching from a homoclinic orbit to an n‑homoclinic orbit, that is, a homoclinic orbit that goes through some neighborhood of the related equilibrium point \({n-1}\) times before finally converging to the equilibrium. Such n‑homoclinic orbits arise in a number of situations involving the eigenvalues of the equilibrium and the orientation of the flow around the orbit. Suppose that the orbit is homoclinic to a saddle with complex conjugate eigenvalues (a saddle-focus). Let the saddle-quantity \({\sigma}\) be the sum of the real parts of the leading stable and the leading unstable eigenvalue. If this quantity is positive, then a so-called Shil'nikov snake exists, which implies the existence of infinitely many n‑periodic and n‑homoclinic orbits for nearby parameters. These n‑homoclinic orbits also arise from certain codim 2 bifurcations:

  1. Belyakov bifurcations: Either the saddle quantity of the saddle-focus goes through zero, or there is a transition between a saddle-focus and a real saddle (here the equilibrium has a double leading eigenvalue).

  2. Homoclinic flip bifurcations: The inclination flip and the orbit flip, where the flow around the orbit changes between orientable and twisted.

  3. The resonant homoclinic doubling: A homoclinic orbit with twisted flow to a real saddle whose saddle quantity \({\sigma}\) goes through zero.

Case 1 produces infinitely many n‑homoclinic orbits, whereas cases 2 and 3 only produce a 2‑homoclinic orbit.

By breaking up a homoclinic orbit globally into two parts, where the division is in a cross‐section away from the equilibrium, somewhere “half‐way”, and then gluing pieces together, it is possible to construct n‑homoclinic orbits from 1‑homoclinic orbits. The gaps between the pieces to be glued can be well defined using Lin's method, which constrains the gaps to lie in the direction of the adjoint vector. This vector can be found by solving the adjoint variational equation mentioned above at the “half‐way” point. Combining the gap distances and the times spent by the flow on the individual pieces, one can construct a well-posed BVP. This BVP can then be continued in those quantities so that, as the gap sizes go to zero, one converges to an n‑homoclinic orbit [66].

Lin's method was also applied in [54] to compute point-to-cycle connections by gluing a piece from the equilibrium to a cross‐section to a piece from the cross section to the cycle.

Software Environments

This review has outlined necessary steps and suitable methods to perform numerical bifurcation analysis. Summarizing, the following sequence of tasks can be recognized.

Initial procedure
1. Compute the invariant solution
2. Characterize the linearized behavior

Continuation
3. Variation of parameters
4. Monitor dynamical indicators

Automated analysis
5. Detect special points and compute normal forms
6. Switch branches

Ideally, these computations are automatically performed by software and indeed, much effort has been spent on implementing the algorithms mentioned in this review and related ones. With the appearance of computers at research institutes the first codes and noninteractive packages were developed. For a recent historical overview of these and their capabilities and algorithms, see [38]. Here it is worthwhile to mention auto86 [24], linblf [51] and dstool [4], as these are the predecessors of the packages auto-07p [21], content [58], matcont [19] and pydstool [13] discussed here. In particular, auto86 is very powerful and still in use today, but not always easy to handle. Therefore, several attempts have been made to provide an interactive graphical user interface. One example is xppaut [32], which is a standard tool in neuroscience.

The latest version of auto is auto-07p, which is written in Fortran and supports parallelization. It uses pseudo‐arclength continuation and performs a limited bifurcation analysis of equilibria, fixed points, limit cycles and connecting orbits, the latter using homcont [12]. A recent addition is the continuation of branch points [16]. It uses fully extended systems, but has specially adapted linear solvers so that it is still quite fast.

An interactive package written in C++ is content, where the Moore–Penrose continuation was first implemented. It supports bifurcation analysis of equilibria up to the continuation of codim 2 and detection of some codim 3 bifurcations. It uses a similar procedure as auto to continue limit cycles and detects codim 1 bifurcations of limit cycles, but does not continue these. It also handles bifurcations of cycles of maps up to the detection of codim 2 bifurcations. Normal forms for all codim 1 bifurcations are computed. Interestingly, both fully and minimally extended systems are implemented.

A new project, matcont, emerged out of content. It is written in matlab, a widely used software tool in modeling. In contrast to auto it uses Moore–Penrose continuation and minimally extended systems to compute equilibria, cycles and connecting orbits. Since matlab is an interpreted language, it is slower than the other packages, although a considerable speedup is obtained as the code for the Jacobi matrices for limit cycles and their bifurcations is written in C and compiled. matcont has, however, much more functionality. It computes normal form coefficients up to codim 2 bifurcations of equilibria [20], and for codim 1 bifurcations of limit cycles normal form coefficients are computed using a BVP algorithm [28]. A recent addition is branch switching from codim 2 bifurcation points of equilibria to codim 1 bifurcations of cycles [61].

Finally, pydstool supports functionality for ODEs similar to that of content.

Future Directions

This review focuses on methods for equilibria and limit cycles to gain insight into the dynamics of a nonlinear ODE depending on parameters, but, generally speaking, other characteristics play a role too. For instance, a Neimark–Sacker bifurcation leads to the presence of tori. The computation and continuation of higher dimensional tori has been considered in [49,70,71]. The generalization of the methods for equilibria and cycles is, however, not straightforward. Stable and unstable manifolds are another aspect. In particular their visualization can hint at the presence of global bifurcations. A review of methods for computing such manifolds is presented in [52]. Connecting orbits can be calculated, but initializing such a continuation is a nontrivial task. This may be started from certain codim 2 bifurcation points as in [8], but good initial approximations are not available for other cases.

This review is also restricted to ODEs, but one can define other classes of dynamical systems. For instance, if the system is given by an explicitly defined map, i. e., not implicitly as a Poincaré map, the described approach can also be carried out [10,37,40,59]. Another important and related class is given by delay equations, and the algorithms for equilibria and periodic orbits in the ODE case can be applied with suitable modifications, see [5,74] and the software implementations dde‐biftool [31] and pdde-cont. The computation of normal forms and connecting orbits for this class is not yet thoroughly investigated or supported.

For large systems, e. g., discretizations of a partial differential equation (PDE), the algebra for some algorithms in this review becomes quite involved and numerically expensive. These problems need special treatment, see loca [69], pde-cont [30,62] and related references. These packages focus on computing (periodic) solutions and bifurcation curves. Good algorithms for analyzing bifurcations of PDEs will be a major research topic.

Finally, there are also slow-fast (stiff) systems or systems with a particular structure such as symmetries. For these classes, many questions remain open.

While pioneering work has been done on most of the mentioned topics, the methods are far from as “complete” as those for ODEs. An overview of current research topics in dynamical systems can be found in [53].