1 Introduction and statement of the main result

In the qualitative theory of differential equations, it is important to know whether a given differential system is chaotic or not. One might think that it is not possible to generate a chaotic system without equilibrium points. This question was answered by Chen and Wang [21], where the authors introduced the following polynomial differential system in \(\mathbb {R}^3\)

$$\begin{aligned} \dot{x}\,=\,y, \quad \dot{y}\,=\,z, \quad \dot{z}\,=\,-y - x^2 - x z + 3 y^2 + a,\nonumber \\ \end{aligned}$$
(1)

where \(a\in \mathbb {R}\) is a parameter. They observe that when \(a>0\) system (1) has two equilibria \((\pm \sqrt{a},0,0)\), that when \(a=0\) the two equilibria collide at the origin \((0,0,0)\), and that for \(a<0\) system (1) has no equilibria but still generates a chaotic attractor; see [21] for more details. The Chen–Wang differential system [21] is relevant because, as its authors claimed, it seems to be the first example of a differential system in \(\mathbb {R}^3\) which exhibits chaotic motion and has no equilibria.
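The following is a minimal numerical sketch (ours, not taken from [21]) for exploring this behavior; the value \(a=-0.05\) and the initial condition are illustrative choices only, not parameters fixed by the source.

```python
# A minimal numerical sketch (not taken from [21]): for a < 0 the right-hand
# side of (1) has no zeros (setting y = z = 0 forces x**2 = a < 0), yet
# trajectories can be integrated and plotted for comparison with the
# attractor reported in [21]. The value a = -0.05 and the initial condition
# are illustrative choices only.
import numpy as np
from scipy.integrate import solve_ivp

def chen_wang(t, state, a):
    x, y, z = state
    return [y, z, -y - x**2 - x*z + 3*y**2 + a]

a = -0.05
sol = solve_ivp(chen_wang, (0.0, 200.0), [0.0, 0.4, 0.0], args=(a,),
                rtol=1e-9, atol=1e-9)
print(sol.y[:, -1])   # plot sol.y[0], sol.y[1], sol.y[2] in 3D to visualize
```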

The first objective of this paper is to study the integrability of system (1). We recall that the existence of a first integral for a differential system in \(\mathbb {R}^3\) allows one to reduce its study by one dimension. This is the main reason to look for first integrals. Moreover, among the first integrals, the simplest are the so-called Darboux first integrals; for more information about this kind of first integrals see for instance [8, 12–14] and Sect. 2.1.

The second objective is to study the zero-Hopf bifurcation exhibited by the polynomial differential system (1). The main tool up to now for studying a zero-Hopf bifurcation is to transform the system into the normal form of a zero-Hopf bifurcation; later on in this introduction we provide references about this. Our analysis of the zero-Hopf bifurcation is different: we study it directly using the averaging theory, see Sect. 2.2 for a summary of the results of this theory that we need in this paper.

Let \(U\) be an open and dense subset of \(\mathbb R^3\). We say that a non-locally constant \(C^1\) function \(H :U \rightarrow \mathbb R\) is a first integral of system (1) on \(U\) if \(H(x(t), y(t),z(t))\) is constant for all values of \(t\) for which \((x(t),y(t),z(t))\) is a solution of system (1) contained in \(U\). Obviously, \(H\) is a first integral of system (1) if and only if

$$\begin{aligned} y \frac{\partial H}{\partial x} + z \frac{\partial H}{\partial y}+ (-y - x^2 - x z + 3 y^2 + a)\frac{\partial H}{\partial z} = 0, \end{aligned}$$

for all \((x,y,z)\in U\).
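As a concrete illustration, the following sympy sketch (ours) implements the left-hand side of this condition as an operator and applies it to a hypothetical candidate \(H\); the nonzero output shows that this particular candidate is not a first integral.

```python
# A small sympy sketch (illustrative, not part of the paper's proofs): the
# derivative along the vector field of system (1), applied to a candidate H.
import sympy as sp

x, y, z, a = sp.symbols('x y z a')

def X_derivative(H):
    """Derivative of H along the vector field of system (1)."""
    return (y*sp.diff(H, x) + z*sp.diff(H, y)
            + (-y - x**2 - x*z + 3*y**2 + a)*sp.diff(H, z))

H_test = x*z - y**2/2                    # hypothetical candidate, chosen only as an example
print(sp.expand(X_derivative(H_test)))   # nonzero, so H_test is not a first integral
```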

The first main result of this paper is:

Theorem 1

The following statements hold.

  1. (a)

    System (1) has neither invariant algebraic surfaces, nor polynomial first integrals.

  2. (b)

    All the exponential factors of system (1) are of the form \(\exp (c_1 x + c_2 y)\) with \(c_1,c_2\in \mathbb {C}\). In particular, \(\exp (x)\) and \(\exp (y)\) are exponential factors with cofactors \(y\) and \(z\), respectively.

  3. (c)

    System (1) has no Darboux first integrals.

Theorem 1 is proved in Sect. 3. See Sect. 2 for the definitions of invariant algebraic surface, exponential factor, and Darboux first integral.

The second main objective of this paper is to show that system (1) exhibits two small amplitude periodic solutions for \(a>0\) sufficiently small that bifurcate from a zero-Hopf equilibrium point localized at the origin of coordinates when \(a=0\).

We recall that an equilibrium point of a \(3\)-dimensional autonomous differential system is a zero-Hopf equilibrium if it has a zero real eigenvalue and a pair of purely imaginary eigenvalues. We know that generically a zero-Hopf bifurcation is a two-parameter unfolding (or family) of a \(3\)-dimensional autonomous differential system with a zero-Hopf equilibrium. The unfolding can exhibit different topological types of dynamics in a small neighborhood of this isolated equilibrium as the two parameters vary in a small neighborhood of the origin. This theory of zero-Hopf bifurcation has been analyzed by Guckenheimer, Han, Holmes, Kuznetsov, Marsden, and Scheurle in [9, 10, 15, 16, 19]. In particular, they show that some complicated invariant sets of the unfolding can bifurcate from the isolated zero-Hopf equilibrium under suitable conditions, showing that in some cases the zero-Hopf bifurcation can imply a local birth of “chaos”; see for instance the articles [2–4, 7, 19] of Baldomá and Seara, Broer and Vegter, Champneys and Kirk, Scheurle, and Marsden.

Note that the differential system (1) depends on only one parameter, so it cannot exhibit a complete unfolding of a zero-Hopf bifurcation. For studying the zero-Hopf bifurcation of system (1), we shall use the averaging theory in a similar way as it was used in [5] by Castellanos, Llibre and Quilantán.

In the next result, we characterize when the equilibrium points of system (1) are zero-Hopf equilibria.

Proposition 1

The differential system (1) has a unique zero-Hopf equilibrium localized at the origin of coordinates when \(a=0\).

The second main result of this paper characterizes the Hopf bifurcations of system (1). For a precise definition of a classical Hopf bifurcation in \(\mathbb {R}^3\), where a pair of complex conjugate eigenvalues crosses the imaginary axis and the third, real eigenvalue is nonzero, see for instance [17].

Theorem 2

The following statements hold for the differential system (1).

  1. (a)

    This system has no classical Hopf bifurcations.

  2. (b)

    This system has a zero-Hopf bifurcation at the equilibrium point localized at the origin of coordinates when \(a=0\) producing two small periodic solutions for \(a>0\) sufficiently small of the form

    $$\begin{aligned}&x(t)= \pm \sqrt{a}+ O(a), \quad y(t)= O(a), \\&z(t)= O(a). \end{aligned}$$

    Both periodic solutions have two invariant manifolds, one stable and one unstable, each of them formed by two cylinders. See Fig. 1 for the zero-Hopf periodic solution with initial conditions near \((\sqrt{a},a,a)\) with \(a=1/10{,}000\). The other zero-Hopf periodic orbit is the image of this one under the symmetry \((x,y,z,t)\rightarrow (-x,y,-z,-t)\), which leaves the differential system (1) invariant.

Fig. 1

The zero-Hopf periodic orbit with initial conditions near \((\sqrt{a},a,a)\) for \(a=1/10{,}000\)

The paper is organized as follows. In Sect. 2, we present the basic definitions and results necessary to prove Theorems 1 and 2. In Sect. 3, we prove Theorem 1, and in Sect. 4, we present the proofs of Proposition 1 and Theorem 2.

2 Preliminaries

2.1 Darboux theory of integrability

As usual \(\mathbb C[x,y,z]\) denotes the ring of polynomial functions in the variables \(x,y\) and \(z\). Given \(f \in \mathbb C[x, y,z]\setminus \mathbb {C}\), we say that the surface \(f(x,y,z)=0\) is an invariant algebraic surface of system (1) if there exists \(k \in \mathbb C[x, y,z]\) such that

$$\begin{aligned} y \frac{\partial f}{\partial x} + z \frac{\partial f}{\partial y}+ (-y - x^2 - x z + 3 y^2 + a)\frac{\partial f}{\partial z} = k f. \end{aligned}$$
(2)

The polynomial \(k\) is called the cofactor of the invariant algebraic surface \(f=0\), and it has degree at most \(1\). When \(k=0\), \(f\) is a polynomial first integral. When a real polynomial differential system has a complex invariant algebraic surface, it also has the conjugate one. It is important to consider the complex invariant algebraic surfaces of real polynomial differential systems because sometimes they force the real integrability of the system.

Let \(f,g \in \mathbb C[x,y,z]\) and assume that \(f\) and \(g\) are relatively prime in the ring \(\mathbb C[x,y,z]\), or that \(g=1\). Then, the function \(\exp (f/g)\not \in \mathbb {C}\) is called an exponential factor of system (1) if for some polynomial \(L \in \mathbb C[x,y,z]\) of degree at most \(1\) we have

$$\begin{aligned}&y \frac{\partial \exp (f/g)}{\partial x} + z \frac{\partial \exp (f/g)}{\partial y} \nonumber \\&\qquad +\,(-y - x^2 - x z + 3 y^2 + a) \frac{\partial \exp (f/g)}{\partial z}\nonumber \\&\quad =L\, \exp (f/g). \end{aligned}$$
(3)

As before, we say that \(L\) is the cofactor of the exponential factor \(\exp {(f/g)}\). We observe that in the definition of exponential factor if \(f,g \in \mathbb C[x,y,z]\) then the exponential factor is a complex function. Again, when a real polynomial differential system has a complex exponential factor, it also has the conjugate one, and both are important for the existence of real first integrals of the system. The exponential factors are related to the multiplicity of the invariant algebraic surfaces; for more details see [6], Chapter 8 of [11], and [13, 14].
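As a quick symbolic check, anticipating statement (b) of Theorem 1, the following sketch (ours) verifies that \(\exp (x)\) and \(\exp (y)\) satisfy (3) with the linear cofactors \(y\) and \(z\); the helper `cofactor` is introduced only for this check.

```python
# A quick sympy check (a sketch, not part of the formal proof) that exp(x) and
# exp(y) satisfy the exponential-factor equation (3) with linear cofactors.
import sympy as sp

x, y, z, a = sp.symbols('x y z a')
P = (y, z, -y - x**2 - x*z + 3*y**2 + a)   # right-hand side of system (1)

def cofactor(E):
    """Return (X E)/E, which must be a polynomial of degree <= 1 for an exponential factor."""
    XE = sum(Pi*sp.diff(E, v) for Pi, v in zip(P, (x, y, z)))
    return sp.simplify(XE / E)

print(cofactor(sp.exp(x)))   # y
print(cofactor(sp.exp(y)))   # z
```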

A first integral is called a Darboux first integral if it is a first integral of the form

$$\begin{aligned} f_1^{\lambda _1} \cdots f_p^{\lambda _p} F_1^{\mu _1}\cdots F_q^{\mu _q}, \end{aligned}$$

where \(f_i=0\) are invariant algebraic surfaces of system (1) for \(i=1,\ldots ,p\), the \(F_j\) are exponential factors of system (1) for \(j= 1, \ldots , q\), and \(\lambda _i, \mu _j \in \mathbb C\).

The next result, proved in [11], explains how to find Darboux first integrals.

Proposition 2

Suppose that a polynomial system (1) of degree \(m\) admits \(p\) invariant algebraic surfaces \(f_i = 0\) with cofactors \(k_i\) for \(i = 1, \ldots ,p\) and \(q\) exponential factors \(\exp (g_j/h_j)\) with cofactors \(L_j\) for \(j = 1,\ldots ,q\). Then, there exist \(\lambda _i\) and \(\mu _j \in \mathbb C\) not all zero such that

$$\begin{aligned} \displaystyle {\sum _{i=1}^p \lambda _i k_i + \sum _{j=1}^q \mu _j L_j=0}, \end{aligned}$$
(4)

if and only if the function

$$\begin{aligned} f_1^{\lambda _1} \ldots f_p^{\lambda _p} \left( \exp \left( \frac{g_1}{h_1} \right) \right) ^{\mu _1} \ldots \left( \exp \left( \frac{g_q}{h_q}\right) \right) ^{\mu _q} \end{aligned}$$

is a Darboux first integral of system (1).

The following result, whose proof is given in [13, 14], will be useful to prove statement (b) of Theorem 1.

Lemma 1

The following statements hold.

  1. (a)

    If \(\exp (f/g)\) is an exponential factor for the polynomial differential system (1) and \(g\) is not a constant polynomial, then \(g=0\) is an invariant algebraic surface.

  2. (b)

    Eventually, \(\exp (f)\) can be an exponential factor, coming from the multiplicity of the invariant plane at infinity.

2.2 Averaging theory

We also present a result from the averaging theory that we shall need for proving Theorem 2; for a general introduction to the averaging theory, see the book of Sanders, Verhulst and Murdock [18].

We consider the initial value problems

$$\begin{aligned} \dot{\mathbf {x}}=\varepsilon F_1(t,\mathbf {x})+\varepsilon ^2 F_2(t,\mathbf {x},\varepsilon ),\quad \mathbf {x}(0)=\mathbf {x}_0, \end{aligned}$$
(5)

and

$$\begin{aligned} \dot{\mathbf {y}}=\varepsilon g({\mathbf{y}}),\quad \mathbf {y}(0)=\mathbf {x}_0, \end{aligned}$$
(6)

with \(\mathbf {x}\), \(\mathbf {y}\) and \(\mathbf {x}_0 \) in some open subset \(\Omega \) of \({\mathbb {R}^n}\), \(t\in [0,\infty )\), and \(\varepsilon \in (0,\varepsilon _0]\). We assume that \(F_1\) and \(F_2\) are periodic of period \(T\) in the variable \(t\), and we set

$$\begin{aligned} g(\mathbf {y})=\dfrac{1}{T}\int _{0}^{T}F_1(t,\mathbf {y})\hbox {d}t. \end{aligned}$$
(7)

We will also use the notation \(D_\mathbf {x}g\) for all the first derivatives of \(g\), and \(D_\mathbf {xx}g\) for all the second derivatives of \(g\).

For a proof of the next result, see [20].

Theorem 3

Assume that \(F_1\), \(D_\mathbf {x}F_1\), \(D_\mathbf {xx}F_1\) and \(D_\mathbf {x}F_2\) are continuous and bounded by a constant independent of \(\varepsilon \) in \([0,\infty )\times \Omega \times (0,\varepsilon _0]\), and that \(\mathbf {y}(t)\in \Omega \) for \(t\in [0,1/\varepsilon ]\). Then, the following statements hold:

  1. 1.

    For \(t\in [0,1/\varepsilon ]\), we have \(\mathbf {x}(t)- \mathbf {y}(t)= O(\varepsilon )\) as \(\varepsilon \rightarrow 0\).

  2. 2.

    If \(p \ne 0\) is a singular point of system (6) such that

    $$\begin{aligned} \det D_\mathbf {y}g(p) \ne 0, \end{aligned}$$
    (8)

    then there exists a periodic solution \(\mathbf {x}(t,\varepsilon )\) of period \(T\) for system (5) which is close to p and such that \(\mathbf {x}(0,\varepsilon )-p= O(\varepsilon )\) as \(\varepsilon \rightarrow 0\).

  3. 3.

    The stability of the periodic solution \(\mathbf {x}(t, \varepsilon )\) is given by the stability of the singular point.

3 Proof of Theorem 1

To prove Theorem 1(a), we state and prove two auxiliary results. As usual, we denote by \(\mathbb N\) the set of positive integers.

Lemma 2

If \(h=0\) is an invariant algebraic surface of system (1) with nonzero cofactor \(k\), then \(k = k_0- m x\) for some \(k_0 \in \mathbb C\) and \(m \in \mathbb N \cup \{0\}\).

Proof

Let \(h\) be an invariant algebraic surface of system (1) with nonzero cofactor \(k\). Then, \(k = k_0 + k_1 x + k_2 y + k_3 z\) for some \(k_0, k_1, k_2, k_3 \in \mathbb C\). Let \(n\) be the degree of \(h\). We write \(h\) as the sum of its homogeneous parts, \(h = \sum _{i=1}^{n} h_i\), where each \(h_i\) is a homogeneous polynomial of degree \(i\). Without loss of generality, we can assume that \(h_n \ne 0\) and \(n \ge 1\).

Computing the terms of degree \(n + 1\) in (2), we get that

$$\begin{aligned} \left( - x^2 - x z + 3 y^2\right) \frac{\partial h_n}{\partial z}= (k_1 x + k_2 y + k_3 z)h_n. \end{aligned}$$

Solving this linear partial differential equation, we get

$$\begin{aligned} h_n(x,y,z)&= C_n (x,y) \exp \left( -\frac{k_3 z}{x}\right) \\&\times (x (x+z)- 3 y^2)^{p(x,y)} , \end{aligned}$$

where \(C_n\) is an arbitrary function in the variables \(x\) and \(y\), and \(p(x,y)=-k_1+k_3- \dfrac{k_2 y}{x}- \dfrac{3 k_3 y^2}{x^2}\). Since \(h_n\) must be a homogeneous polynomial, we must have \(k_3=k_2=0\) and \(k_1=-m\) with \(m \in \mathbb N \cup \{0\}\). This concludes the proof of the lemma. \(\square \)

Lemma 3

Let \(g=g(x,y)\) be a homogeneous polynomial of degree \(n\) with \(n\ge 1\) satisfying

$$\begin{aligned} k_0 x g + (x^2- 3 y^2)\frac{\partial g}{\partial y} - x y \frac{\partial g}{\partial x}=0. \end{aligned}$$
(9)

Then, \(g=0\).

Proof

We write the homogeneous polynomial of degree \(n\) as \(g=g(x,y)= \sum _{i=0}^{n} a_i x^i y^{n-i}\) with \(a_i \in \mathbb C\). Using that \(g\) satisfies (9) we get

$$\begin{aligned}&k_0 \sum _{i=0}^{n} a_i x^{i+1} y^{n-i}+ \sum _{i=0}^{n} (n-i) a_i x^{i+2} y^{n-i-1} \nonumber \\&\quad - 3\sum _{i=0}^{n} (n-i) a_i x^i y^{n-i+1} - \sum _{i=0}^{n} i a_i x^i y^{n-i+1} =0. \nonumber \\ \end{aligned}$$
(10)

Computing the coefficients of \(x^j y^{n+1-j}\), for \(0 \le j \le n+1\) in Eq. (10), we get

$$\begin{aligned}&-3n a_0 =0,\nonumber \\&k_0 a_0 +(2-3n) a_1=0,\nonumber \\&(n-i+1)a_{i-1}+ k_0 a_i+(2i+2-3n)a_{i+1}=0, \nonumber \\&\vdots \nonumber \\&2 a_{n-2} + k_0 a_{n-1} -n a_{n}=0,\nonumber \\&a_{n-1}+ k_0 a_{n}=0. \end{aligned}$$
(11)

where the third equation holds for \(1 \le i \le n-2\).

Taking into account that

$$\begin{aligned} 2i+2-3n \le 2(n-2)+2 -3n=-n-2<0, \end{aligned}$$

it follows recursively from (11) that \(a_i=0\) for all \(0\le i\le n\), so \(g=0\). \(\square \)
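For a fixed degree, the linear system (11) can also be checked with a computer algebra system; the following sympy sketch (ours, with the illustrative choice \(n=4\) and \(k_0\) kept symbolic) confirms that only the trivial solution exists. It is a spot check, not a substitute for the recursive argument above.

```python
# An illustrative sympy verification of Lemma 3 for one particular degree
# (n = 4) and a symbolic k0: the only homogeneous solution of (9) is g = 0.
import sympy as sp

x, y, k0 = sp.symbols('x y k0')
n = 4
a_coeffs = sp.symbols(f'a0:{n+1}')
g = sum(a_coeffs[i]*x**i*y**(n - i) for i in range(n + 1))

expr = sp.expand(k0*x*g + (x**2 - 3*y**2)*sp.diff(g, y) - x*y*sp.diff(g, x))
eqs = [sp.Eq(c, 0) for c in sp.Poly(expr, x, y).coeffs()]
print(sp.solve(eqs, a_coeffs))   # only the trivial solution: every a_i equals 0
```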

Proof of Theorem 1(a)

Let \(f=0\) be an invariant algebraic surface of degree \(n\ge 1\) of system (1) with cofactor \(k(x,y,z)=k_0 + k_1 x +k_2 y + k_3 z\). It follows from Lemma 2 that \(k=k(x,y,z)=k_0 - m x\), with \(m \in \mathbb N \cup \{0\}\). We write \(f\) as the sum of its homogeneous parts, \(f=\sum _{i=0}^{n} f_i\), where \(f_i=f_i(x,y,z)\) is a homogeneous polynomial of degree \(i\).

Computing the terms of degree \(n + 1\) in (2), we get that

$$\begin{aligned} (- x^2 - x z + 3 y^2)\frac{\partial f_n}{\partial z}= -m x f_n. \end{aligned}$$

Solving this linear differential equation, we get that

$$\begin{aligned} f_n(x,y,z)= (x (x+z)- 3 y^2)^{m}g(x,y), \end{aligned}$$
(12)

where \(g=g(x,y)\) is a homogeneous polynomial of degree \(n-2m\) in the variables \(x\) and \(y\).

Assume first \(m=0\). Computing the terms of degree \(n\) in (2), we obtain

$$\begin{aligned}&( 3 y^2- x^2 - x z)\frac{\partial f_{n-1}}{\partial z} +\frac{\partial f_{n}}{\partial x} y + \frac{\partial f_{n}}{\partial y} z \nonumber \\&\quad + \frac{\partial f_{n}}{\partial z}(-y) = k_0 f_n. \end{aligned}$$
(13)

Solving (13), we have

$$\begin{aligned}&f_{n-1}(x,y,z)\nonumber \\&\quad = K(x,y)+ \frac{z}{x} \frac{\partial g}{\partial y}+ \frac{1}{x^2} \log [x(x+z)-3y^2] \nonumber \\&\qquad \times \Big (-k_0 x g - (x^2-3 y^2) \frac{\partial g}{\partial y}+xy \frac{\partial g}{\partial x}\Big ), \end{aligned}$$

where \(K\) is an arbitrary function in the variables \(x\) and \(y\). Since \(f_{n-1}\) must be a homogeneous polynomial of degree \(n-1\) we must have

$$\begin{aligned} -k_0 x g - (x^2-3 y^2) \frac{\partial g}{\partial y}+xy \frac{\partial g}{\partial x}=0. \end{aligned}$$

It follows from Lemma 3 that \(g=0\), and then from (12) that \(f_n=0\), in contradiction with the fact that \(f\) has degree \(n\ge 1\). So \(m>0\).

To simplify the computations, we introduce the weight change of variables

$$\begin{aligned} x=X, \,\,\, y=\mu ^{-1} Y, \,\,\, z= \mu ^{-1}Z, \,\,\, t= \mu T, \end{aligned}$$

with \(\mu \in \mathbb R{\setminus }\{0\}\). Then, system (1) becomes

$$\begin{aligned}&X'\,=\, Y, \quad Y'\,=\, \mu Z, \nonumber \\&Z'\,=\,3 Y^2 - \mu Y - \mu ^2 X^2-\mu X Z + \mu ^2 a, \end{aligned}$$
(14)

where the prime denotes derivative with respect to the variable \(T\).

Set

$$\begin{aligned} F(X,Y,Z)&= \mu ^n f(X, \mu ^{-1}Y, \mu ^{-1}Z)\\&= \sum _{i=0}^n \mu ^i F_i(X,Y,Z), \end{aligned}$$

where \(F_i\) is the weight homogeneous part with weight degree \(n-i\) of \(F\) and \(n\) is the weight degree of \(F\) with weight exponents \(s=(0,-1,-1)\). We also set \(K(X,Y,Z)=k(X, \mu ^{-1}Y, \mu ^{-1}Z)= k_0 - m X\).

From the definition of an invariant algebraic surface, we have

$$\begin{aligned}&(3 Y^2 -\mu Y -\mu ^2 X^2-\mu X Z + \mu ^2 a )\sum _{i=0}^{n} \mu ^{i} \frac{\partial F_i}{\partial Z} \nonumber \\&\qquad +\, Y \sum _{i=0}^{n} \mu ^{i} \frac{\partial F_i}{\partial X} + \mu Z \sum _{i=0}^n \mu ^{i} \frac{\partial F_i}{\partial Y}\nonumber \\&\quad = (k_0 - m X) \sum _{i=0}^{n} \mu ^{i} F_i. \end{aligned}$$
(15)

Equating in (15) the terms with \(\mu ^0\), we get

$$\begin{aligned} Y \frac{\partial F_0}{\partial X} + 3 Y^2 \frac{\partial F_0}{\partial Z}=(k_0 - m X)F_0, \end{aligned}$$
(16)

where \(F_0\) is a weight homogeneous polynomial of degree \(n\).

Solving (16) we readily obtain, by direct computation, that

$$\begin{aligned} F_0(X,Y,Z)=G(Y,Z-3XY)\exp \left( \frac{X ( 2 k_0 -m X)}{2Y}\right) , \end{aligned}$$

where \(G\) is an arbitrary function of \(Y\) and \(Z-3XY\). Since \(F_0\) must be a polynomial and \(m> 0\), we must have \(F_0=0\). This implies that \(F=0\) is not an invariant algebraic surface of system (14); consequently, \(f=0\) is not an invariant algebraic surface of system (1). This completes the proof of Theorem 1(a). \(\square \)

Proof of Theorem 1(b)

Let \(E=\exp (f/g)\notin \mathbb {C}\) be an exponential factor of system (1) with cofactor \(L=L_0 + L_1 x + L_2 y + L_3 z\), where \(f,g \in \mathbb C[x,y,z]\) with \((f,g)=1\). From Theorem 1(a) and Lemma 1, \(E=\exp (f)\) with \(f=f(x,y,z) \in \mathbb C[x,y,z]\setminus \mathbb C\).

It follows from Eq. (3) that \(f\) satisfies

$$\begin{aligned}&(3 y^2 - y - x^2- x z + a) \frac{\partial f}{\partial z} + y \frac{\partial f}{\partial x} + z \frac{\partial f}{\partial y} \nonumber \\&\quad = L_0 + L_1 x + L_2 y + L_3 z, \end{aligned}$$
(17)

where we have simplified the common factor \(\exp (f)\).

We write \(f=\sum _{i=0}^n f_i(x,y,z)\), where \(f_i\) is a homogeneous polynomial of degree \(i\). Assume \(n>1\). Computing the terms of degree \(n+1\) in (17), we obtain

$$\begin{aligned} (- x^2 - x z + 3 y^2)\frac{\partial f_n}{\partial z}= 0. \end{aligned}$$

Solving it and using that \(f_n\) is a homogeneous polynomial of degree \(n\), we get \(f_n(x,y,z)=g_n(x,y)\), where \(g_n(x,y)\) is a homogeneous polynomial of degree \(n\). Computing the terms of degree \(n\) in (17), we obtain

$$\begin{aligned} (3y^2- x^2 - x z)\frac{\partial f_{n-1}}{\partial z} +\frac{\partial g_{n}}{\partial x} y + \frac{\partial g_{n}}{\partial y} z = 0. \end{aligned}$$
(18)

Solving (18), we get

$$\begin{aligned} f_{n-1}&= g_{n-1}(x,y)+\frac{z}{x}\frac{\partial g_n}{\partial y} \nonumber \\&+\frac{1}{x^2}\left( (3 y^2- x^2) \frac{\partial g_n}{\partial y}+ x y \frac{\partial g_n}{\partial x}\right) \nonumber \\&\times \log \left( x^2 +x z-3 y^2\right) , \end{aligned}$$

where \(g_{n-1}(x,y)\) is an arbitrary function in the variables \(x\) and \(y\). Since \(f_{n-1}\) must be a homogeneous polynomial of degree \(n-1\), we must have

$$\begin{aligned} (3 y^2- x^2) \frac{\partial g_n}{\partial y}+ x y \frac{\partial g_n}{\partial x}=0, \end{aligned}$$

which yields

$$\begin{aligned} g_n = g_n\Big (\frac{2y^2-x^2}{2 x^6}\Big ). \end{aligned}$$

Taking into account that \(g_n\) must be a homogeneous polynomial of degree \(n\), we get \(g_n=0\). This implies that \(f_n=0\), which contradicts the assumption \(n>1\); hence \(n=1\).

We can write \(f=a_1 x + a_2 y + a_3 z\) with \(a_i \in \mathbb {C}\). Imposing that \(f\) must satisfy (17), we get \(f=a_1 x + a_2 y\) with cofactor \(a_1 y+a_2 z\). This concludes the proof of Theorem 1(b). \(\square \)
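The last computation is easy to reproduce symbolically; the following small sympy sketch (ours) shows that imposing a cofactor of degree at most \(1\) forces \(a_3=0\).

```python
# A small check (sketch) of the final step: imposing (17) on f = a1*x + a2*y + a3*z.
import sympy as sp

x, y, z, a, a1, a2, a3 = sp.symbols('x y z a a1 a2 a3')
f = a1*x + a2*y + a3*z
Xf = y*sp.diff(f, x) + z*sp.diff(f, y) + (-y - x**2 - x*z + 3*y**2 + a)*sp.diff(f, z)
print(sp.expand(Xf))   # the quadratic terms carry the factor a3, so a degree-1
                       # cofactor requires a3 = 0, leaving the cofactor a1*y + a2*z
```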

Proof of Theorem 1(c)

It follows from Proposition 2 and statements (a) and (b) of Theorem 1 that if system (1) has a Darboux first integral then there exist \(\mu _1,\mu _2 \in \mathbb C\) not both zero such that (4) holds, that is, such that

$$\begin{aligned} \mu _1 y + \mu _2 z= 0. \end{aligned}$$

But this is not possible. In short, there are no Darboux first integrals for system (1) and the proof of Theorem 1(c) is completed. \(\square \)

4 Zero-Hopf bifurcation

In this section, we prove Proposition 1 and Theorem 2.

Proof of Proposition 1

System (1) has two equilibrium points \(e_\pm =(\pm \sqrt{a},0,0)\) when \(a>0\), which collide at the origin when \(a=0\). The proof is done by computing directly the eigenvalues at each equilibrium point. Note that the characteristic polynomial of the linear part of system (1) at the equilibrium point \(e_\pm \) is

$$\begin{aligned} p(\lambda )= \lambda ^3 \pm \sqrt{a} \lambda ^2 + \lambda \pm 2 \sqrt{a}. \end{aligned}$$

Since \(p(\lambda )\) is a polynomial of degree 3, it has either one, two (one of them with multiplicity \(2\)), or three real zeros. Using the discriminant of \(p(\lambda )\), it follows that \(p(\lambda )\) has a unique real root; see the “Appendix” for more details.
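The discriminant computation invoked here is easily reproduced; the following sympy sketch (ours) treats the equilibrium \(e_+\), and the polynomial for \(e_-\) yields the same expression.

```python
# A sketch supporting the discriminant argument: the discriminant of
# p(lambda) = lambda**3 + sqrt(a)*lambda**2 + lambda + 2*sqrt(a).
import sympy as sp

lam = sp.Symbol('lam')
a = sp.Symbol('a', positive=True)
p = lam**3 + sp.sqrt(a)*lam**2 + lam + 2*sp.sqrt(a)
disc = sp.expand(sp.discriminant(p, lam))
print(disc)   # -8*a**2 - 71*a - 4, negative for every a > 0, so p has exactly
              # one real root and one pair of complex conjugate roots
```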

Imposing the condition

$$\begin{aligned} p(\lambda )=(\lambda -\rho )(\lambda - \varepsilon - i\beta )(\lambda - \varepsilon + i\beta ) \end{aligned}$$

with \(\rho ,\varepsilon ,\beta \in \mathbb {R}\) and \(\beta >0\), we obtain a system of three equations that correspond to the coefficients of the terms of degree \(0, 1\) and \(2\) in \(\lambda \) of the polynomial \(p(\lambda )-(\lambda -\rho )(\lambda - \varepsilon - i\beta )(\lambda - \varepsilon + i\beta )\). This system has only two solutions in the variables \((a,\beta ,\rho )\), which are

$$\begin{aligned}&\left( \dfrac{1-24 \varepsilon ^2 + 32 \varepsilon ^4 - \sqrt{1- 32 \varepsilon ^2} + 8 \varepsilon ^2 \sqrt{1-32 \varepsilon ^2}}{8 \varepsilon ^2}, \right. \\&\left. \qquad \dfrac{\sqrt{3-2 \varepsilon ^2 - \sqrt{1-32 \varepsilon ^2}}}{\sqrt{2}}, \dfrac{-1 + \sqrt{1- 32 \varepsilon ^2}}{4 \varepsilon } \right) \\&\quad = (4\varepsilon ^2+O(\varepsilon ^4),1+O(\varepsilon ^2),-4\varepsilon +O(\varepsilon ^3)), \end{aligned}$$

and

$$\begin{aligned}&\left( \dfrac{1-24 \varepsilon ^2 + 32 \varepsilon ^4 + \sqrt{1- 32 \varepsilon ^2} - 8 \varepsilon ^2 \sqrt{1-32 \varepsilon ^2}}{8 \varepsilon ^2}, \right. \\&\left. \qquad \dfrac{\sqrt{3-2 \varepsilon ^2 + \sqrt{1- 32 \varepsilon ^2}}}{\sqrt{2}}, \dfrac{-1 - \sqrt{1- 32 \varepsilon ^2}}{4 \varepsilon } \right) \\&\quad = \left( \dfrac{1}{4\varepsilon ^2}+O(1), \sqrt{2}+O(\varepsilon ^2),-\dfrac{1}{2\varepsilon }+O(\varepsilon )\right) . \end{aligned}$$

Clearly, at \(\varepsilon =0\), only the first solution is well defined and gives \((a,\beta ,\rho )=(0,1,0)\). Hence, there is a unique zero-Hopf equilibrium point when \(a=0\) at the origin of coordinates with eigenvalues \(0\) and \(\pm i\). This completes the proof of Proposition 1. \(\square \)
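The eigenvalues \(0,\pm i\) at the origin for \(a=0\) can also be confirmed directly; a minimal sympy sketch (ours):

```python
# A direct check (sketch) of the eigenvalues at the origin for a = 0.
import sympy as sp

x, y, z = sp.symbols('x y z')
F = sp.Matrix([y, z, -y - x**2 - x*z + 3*y**2])   # system (1) with a = 0
J = F.jacobian([x, y, z]).subs({x: 0, y: 0, z: 0})
print(J.eigenvals())   # eigenvalues 0, I, -I, each simple
```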

Proof of Theorem 2

It was proven in Proposition 1 that when \(a=0\) the origin is a zero-Hopf equilibrium point. We want to study whether some periodic orbit bifurcates from this equilibrium when the parameter \(a\) of the system varies. For doing this study, we shall use the averaging theory of first order described in Sect. 2 (see Theorem 3). In order to apply this theory to the study of the periodic solutions of a differential system, there are three main steps that we must complete.

  1. Step 1

    By means of convenient changes of variables, we must write the differential system (1) as a differential system which is periodic in its independent variable and depends on a small parameter, as in the normal form (5) for applying the averaging theory. Finding these changes of variables is sometimes the most difficult step.

  2. Step 2

    We must compute explicitly the integral (7) associated with the periodic differential system, in order to reduce the problem of finding periodic solutions to the problem of finding the zeros of the function \(g(\mathbf {y})\) given in (7).

  3. Step 3

    We must compute explicitly the zeros of the mentioned function, in order to obtain periodic solutions of the initial differential system (1).

In order to perform step 1, i.e., to find the changes of variables that write our differential system (1) in the normal form for applying the averaging theory, we first write the linear part of the differential system (1) at the origin when \(a=0\) into its real Jordan normal form, i.e., into the form

$$\begin{aligned} \begin{pmatrix} 0&{}\quad -1 &{}\quad 0 \\ 1 &{}\quad 0 &{}\quad 0\\ 0 &{}\quad 0 &{}\quad 0 \end{pmatrix}\!. \end{aligned}$$

To do this, we apply the linear change of variables

$$\begin{aligned} (x,y,z) \rightarrow (u,v,w), \quad \text{ where } \quad x=-u+w, \;\; y=v, \;\; z=u. \end{aligned}$$

In the new variables \((u,v,w)\), the differential system (1) becomes

$$\begin{aligned}&\dot{u} = a - v + u w + 3 v^2 - w^2,\nonumber \\&\dot{v} = u,\nonumber \\&\dot{w} = a + u w + 3 v^2 - w^2. \end{aligned}$$
(19)
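The computation leading to (19) can be verified symbolically. In the sketch below (ours), the symbols `up`, `vp`, `wp` are stand-ins for \(\dot u\), \(\dot v\), \(\dot w\).

```python
# A sympy sketch verifying that the linear change x = -u + w, y = v, z = u
# transforms system (1) into system (19).
import sympy as sp

u, v, w, a = sp.symbols('u v w a')
up, vp, wp = sp.symbols('up vp wp')      # stand-ins for du/dt, dv/dt, dw/dt

# x = -u + w, y = v, z = u, hence dx/dt = -up + wp, dy/dt = vp, dz/dt = up
x, y, z = -u + w, v, u
xp, yp, zp = -up + wp, vp, up

eqs = [sp.Eq(xp, y), sp.Eq(yp, z), sp.Eq(zp, -y - x**2 - x*z + 3*y**2 + a)]
sol = sp.solve(eqs, [up, vp, wp], dict=True)[0]
for d in (up, vp, wp):
    print(d, '=', sp.expand(sol[d]))     # the printed right-hand sides agree with (19)
```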

Now, we write the differential system (19) in cylindrical coordinates \((r,\theta , w)\) doing the change of variables

$$\begin{aligned} u=r \cos \theta , \quad v= r \sin \theta , \quad w=w, \end{aligned}$$

and system (19) becomes

$$\begin{aligned}&\dot{r} = \cos \theta (a - w^2 + r w \cos \theta + 3 r^2 \sin ^2 \theta ),\nonumber \\&\dot{\theta }=1+\frac{1}{r}(w^2-a) \sin \theta - w\cos \theta \sin \theta - 3 r \sin ^3 \theta ,\nonumber \\&\dot{w} = a - w^2 + r w \cos \theta + 3 r^2 \sin ^2 \theta . \end{aligned}$$
(20)

Now, we do a rescaling of the variables through the change of coordinates

$$\begin{aligned}&(r,\theta ,w) \rightarrow (R,\theta , W), \quad \text{ where } \quad r= \frac{\sqrt{a}}{2} R, \, \nonumber \\&w= \frac{\sqrt{a}}{2}W. \end{aligned}$$

After this rescaling, system (20) becomes

$$\begin{aligned}&\dot{R} = \dfrac{\sqrt{a}}{2} \cos \theta ( 4 - W^2 + R W \cos \theta + 3 R^2 \sin ^2 \theta ), \nonumber \\&\dot{\theta }= 1 - \dfrac{\sqrt{a}}{2R} \sin \theta ( 4 - W^2 + RW \cos \theta + 3R^2 \sin ^2 \theta ), \nonumber \\&\dot{W} = \dfrac{\sqrt{a}}{2} ( 4 - W^2 + R W \cos \theta + 3 R^2 \sin ^2 \theta ). \end{aligned}$$
(21)

This system can be written as

$$\begin{aligned}&\frac{\hbox {d}R}{\hbox {d}\theta } = \frac{\sqrt{a}}{2} F_{11}(\theta , R, W) + O(a), \nonumber \\&\frac{\hbox {d}W}{\hbox {d}\theta } = \frac{\sqrt{a}}{2} F_{12}(\theta , R, W) + O(a), \end{aligned}$$
(22)

where

$$\begin{aligned}&F_{11}(\theta , R, W)\!=\! \cos \theta (4 \!-\! W^2 \!+\! R W \cos \theta \!+\! 3 R^2 \sin ^2 \theta ),\\&F_{12}(\theta , R, W)= (4 - W^2 + R W \cos \theta + 3 R^2 \sin ^2 \theta ). \end{aligned}$$

Using the notation of the averaging theory described in Sect. 2, we have that if we take \(t=\theta \), \(T=2 \pi \), \(\varepsilon =\sqrt{a}\), \(\mathbf{x}=(R, W)^{T}\) and

$$\begin{aligned}&F_1(t,\mathbf{x})= F_{1}(\theta ,R, W)= \frac{1}{2}\begin{pmatrix} F_{11}(\theta ,R, W)\\ F_{12}(\theta ,R, W)\end{pmatrix}, \\&\varepsilon ^2 F_2(t,\mathbf{x},\varepsilon )= O(a), \end{aligned}$$

it is immediate to check that the differential system (22) is written in the normal form (5) for applying the averaging theory, and that it satisfies the assumptions of Theorem 3. This completes step 1.

Now, we compute the integral in (7) with \(\mathbf{y}= (R,W)^{T}\), and denoting

$$\begin{aligned} g(\mathbf{y})=g (R,W)= \begin{pmatrix} g_{11}(R, W)\\ g_{12}(R, W)\end{pmatrix}, \end{aligned}$$

we obtain

$$\begin{aligned} g_{11}(R,W)&= \dfrac{1}{4}R W,\\ g_{12}(R,W)&= \dfrac{1}{4} ( 8 + 3 R^2 - 2 W^2). \end{aligned}$$

So step 2 is done.
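The averaged functions \(g_{11}\) and \(g_{12}\) above can be reproduced with a short symbolic computation (ours); recall that, with \(\varepsilon =\sqrt{a}\), the functions being averaged in (7) are \(\tfrac{1}{2}F_{11}\) and \(\tfrac{1}{2}F_{12}\).

```python
# A sympy sketch reproducing the averaged functions g11 and g12 from (7).
import sympy as sp

theta, R, W = sp.symbols('theta R W')
common = 4 - W**2 + R*W*sp.cos(theta) + 3*R**2*sp.sin(theta)**2
F11, F12 = sp.cos(theta)*common, common

def average(F):
    """Average of F/2 over one period in theta, as in (7)."""
    return sp.simplify(sp.integrate(F/2, (theta, 0, 2*sp.pi))/(2*sp.pi))

print(average(F11))   # R*W/4
print(average(F12))   # 3*R**2/4 - W**2/2 + 2, i.e. (8 + 3*R**2 - 2*W**2)/4
```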

The system \(g_{11}(R,W)=g_{12}(R,W)=0\) has exactly two real solutions, \((R, W)= (0,\pm 2)\). The Jacobian determinant (8) is

$$\begin{aligned} \left| \begin{array}{cc} \dfrac{1}{4} W &{} \dfrac{1}{4} R \\ \dfrac{3}{2} R &{} -W \end{array} \right| = -\dfrac{1}{8}(3 R^2 + 2W^2), \end{aligned}$$

and evaluated at the solutions \((R,W)=(0,\pm 2)\) it takes the value \(-1\ne 0\). Then, by Theorem 3, it follows that for any \(a>0\) sufficiently small system (21) has, for each of these solutions, a periodic solution \(\mathbf{x}(\theta ,\varepsilon )=(R(\theta ,a), W(\theta , a))\) such that \((R(0, a), W(0,a))\) tends to \((0,\pm 2)\) when \(a\) tends to zero. The eigenvalues of the Jacobian matrix at the solution \((0,- 2)\) are \(2\) and \(-1/2\), and the eigenvalues at the solution \((0,2)\) are \(-2\) and \(1/2\). This shows that both periodic orbits are unstable, each having a stable manifold and an unstable manifold, both formed by two cylinders.
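A short continuation of the previous sketch confirms the determinant and the eigenvalues used in this stability discussion.

```python
# The zeros of g and the eigenvalues of its Jacobian there (sketch).
import sympy as sp

R, W = sp.symbols('R W')
g = sp.Matrix([R*W/4, (8 + 3*R**2 - 2*W**2)/4])
Jac = g.jacobian([R, W])
for zero in [(0, 2), (0, -2)]:           # the two real zeros of g
    J = Jac.subs({R: zero[0], W: zero[1]})
    print(zero, J.det(), J.eigenvals())
# both determinants equal -1; eigenvalues {1/2, -2} at (0, 2) and {-1/2, 2} at (0, -2)
```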

Going back to the differential system (20), we get that such a system for \(a>0\) sufficiently small has two periodic solutions of period approximately \(2\pi \) of the form

$$\begin{aligned}&r(\theta )=O(a), \\&w(\theta ) = \pm \sqrt{a}+ O(a). \end{aligned}$$

For the differential system (19), these two periodic solutions become two periodic solutions of period also close to \(2\pi \) of the form

$$\begin{aligned}&u(t)= O(a), \\&v(t)= O(a), \\&w(t)= \pm \sqrt{a}+ O(a). \end{aligned}$$

for \(a>0\) sufficiently small. Finally, we get for the differential system (1) the two periodic solutions

$$\begin{aligned}&x(t)= \pm \sqrt{a}+ O(a), \\&y(t)= O(a), \\&z(t)= O(a). \end{aligned}$$

of period near \(2\pi \) when \(a>0\) is sufficiently small. Clearly, these periodic orbits tend to the origin of coordinates when \(a\) tends to zero. Therefore, they are small amplitude periodic solutions starting at the zero-Hopf equilibrium point. This concludes the proof of Theorem 2. \(\square \)
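As a numerical illustration of Theorem 2 (and of Fig. 1), one can integrate system (1) for a small \(a>0\) starting near \((\sqrt{a},a,a)\) and observe that the trajectory nearly closes up after a time close to \(2\pi \). The sketch below is ours; the tolerances and the time span are illustrative choices, and the computation is an illustration, not a proof.

```python
# A numerical illustration (cf. Fig. 1): for a = 1/10000 the trajectory of
# system (1) started near (sqrt(a), a, a) nearly returns to its starting
# point after a time close to 2*pi.
import numpy as np
from scipy.integrate import solve_ivp

a = 1e-4                                   # the value used in Fig. 1
start = [np.sqrt(a), a, a]

def chen_wang(t, s):
    x, y, z = s
    return [y, z, -y - x**2 - x*z + 3*y**2 + a]

sol = solve_ivp(chen_wang, (0.0, 2*np.pi), start, rtol=1e-12, atol=1e-12)
print(np.array(start))
print(sol.y[:, -1])    # the end point nearly coincides with the starting point
```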