1 Introduction and statement of the main results

One of the main problems concerning differential systems in \(\mathbb {C}^2\), and in particular the Liénard differential systems

$$\begin{aligned} \dot{x}=y+F(x), \quad \dot{y}=x, \end{aligned}$$
(2)

with the function F(x) analytic, is to know whether or not they are integrable. If the function F satisfies \(F(0)=F'(0)=0\), then the eigenvalues of the linear part of system (2) at the singular point located at the origin of coordinates are \(\pm 1\), and consequently the origin is a weak saddle. Recall that a saddle is weak if its eigenvalues are \(\pm \lambda \) with \(0\ne \lambda \in \mathbb {R}\), and strong when its eigenvalues are \(\lambda _1,\lambda _2\in \mathbb {R}\) with \(\lambda _1<0<\lambda _2\) and \(\lambda _2\ne -\lambda _1\).
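The weak/strong dichotomy can be seen concretely. The following sketch (an illustration only; the sample value \(a=1.5\) is our own choice) computes the eigenvalues of the linear part \(\left( {\begin{matrix} a &{} 1 \\ 1 &{} 0 \end{matrix}}\right) \) at the origin, where \(a=F'(0)\):

```python
import math

# Jacobian of system (2) at the origin: [[F'(0), 1], [1, 0]], with a = F'(0).
def eigenvalues(a):
    # Roots of the characteristic polynomial t^2 - a*t - 1 = 0.
    s = math.sqrt(a * a + 4)
    return (a + s) / 2, (a - s) / 2

# a = 0: weak saddle, eigenvalues +-1.
l1, l2 = eigenvalues(0.0)
assert abs(l1 - 1) < 1e-12 and abs(l2 + 1) < 1e-12

# a != 0 (sample a = 1.5): strong saddle, l1 > 0 > l2 with l2 != -l1,
# but always l1 * l2 = -1.
l1, l2 = eigenvalues(1.5)
assert l1 > 0 > l2 and abs(l1 * l2 + 1) < 1e-12 and abs(l1 + l2) > 1e-12
```

Note that the product of the eigenvalues is always \(-1\), a fact used in the proof of Theorem 3 below.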

The vector field associated to the Liénard differential system (2) is

$$\begin{aligned} \mathcal {X}=\displaystyle {(y+F(x))\frac{\partial }{\partial x}+x\frac{\partial }{\partial y}}. \end{aligned}$$

We recall that the function \(H=H(x,y)\) is a first integral of system (2) in an open subset U of \(\mathbb {C}^2\) if

$$\begin{aligned} \mathcal {X}H=\displaystyle {(y+F(x))\frac{\partial H}{\partial x}+x\frac{\partial H}{\partial y}}=0 \quad \text{on the points of } U. \end{aligned}$$
(3)

The next result follows from Theorem 1 of Gasull and Giné [4].

Theorem 1

The Liénard analytic differential system

$$\begin{aligned} \dot{x}= y + F(x), \qquad \dot{y}= b\, x, \qquad \text{ with } 0 \ne b\in \mathbb {C}, \end{aligned}$$
(4)

and \(F(x)=\sum _{j\ge 2} a_j x^j\), is locally integrable at the origin if and only if F(x) is an even function (i.e. \(F(-x)=F(x)\)).

Note that in Theorem 1, as already noticed by the authors, the origin is a weak saddle.

Theorem 1 extends to \(\mathbb {C}^2\), and to every non-zero complex number b, the well-known results on the existence of a local first integral in a neighborhood of the origin for the polynomial Liénard differential system (4) in \(\mathbb {R}^2\) having at the origin a center (i.e. \(b=-1\), obtained by Poincaré [10, 11]) or a weak saddle (i.e. \(b=1\), see [1, 12, 16]).
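The "if" direction of Theorem 1 rests on a symmetry: when F is even, system (4) is invariant under \((x,y,t)\mapsto (-x,y,-t)\), the classical reversibility behind such integrability results. A minimal numerical sketch of this invariance, with the sample choices \(b=1\) and \(F(x)=x^2+0.3x^4\) (both assumptions of ours):

```python
# Reversibility check for system (4) with an even F (sample choices below).
b = 1.0
F = lambda x: x**2 + 0.3 * x**4           # even: F(-x) = F(x)

def X(x, y):                              # vector field of system (4)
    return (y + F(x), b * x)

# For phi(x, y) = (-x, y) with time reversed, we need Dphi.X(p) = -X(phi(p)),
# i.e. the first components agree and the second components are opposite.
for (x, y) in [(0.3, -0.7), (1.1, 0.4), (-0.5, 0.9)]:
    fx, fy = X(x, y)
    gx, gy = X(-x, y)
    assert abs(fx - gx) < 1e-12 and abs(fy + gy) < 1e-12
```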

Throughout the paper, \(\mathbb {Z}^+\) and \(\mathbb {Q}^+\) denote the sets of non-negative integers and non-negative rational numbers, respectively. Consider analytic differential systems in \(\mathbb {C}^2\) of the form

$$\begin{aligned} \dot{u}=\lambda \, u + \cdots , \qquad \dot{v}= - \mu \, v + \cdots , \end{aligned}$$
(5)

where \(\lambda \) and \(\mu \) are non-zero complex numbers. In (5) the dots \(\cdots \) denote nonlinear terms. From Poincaré [10, 11] and Furta [3] we know that a necessary condition for the existence of an analytic first integral in a neighborhood of the origin of system (5) is that \(\lambda /\mu =p/q\in \mathbb {Q}^+\setminus \{0\}\) with \(\gcd (p,q)=1\). When \(\lambda \) and \(\mu \) satisfy this condition we say that the origin is in \([p:-q]\) resonance.

A \([p:-q]\) resonant differential system (5), after a scaling of time if necessary, can be written as

$$\begin{aligned} \dot{u}=p \, u + \cdots , \qquad \dot{v}= - q \, v + \cdots , \end{aligned}$$
(6)

with \(p,q \in \mathbb {Z}^+\setminus \{0\}\). The next result follows from Theorem 4 of [5].

Theorem 2

The Liénard analytic differential system (2) with a strong saddle at the origin can be transformed into a system with a \([p:-q]\) resonant saddle at the origin.

The study of the existence or not of a first integral in a neighborhood of a \([p:-q]\) resonant saddle is a difficult problem; see for instance [2, 7, 8, 13, 14, 15] and the references quoted there. Hence Theorem 2 says that the study of the existence or not of a first integral in a neighborhood of a strong saddle for the Liénard differential system (2) is also difficult.

When a planar differential system has a (local) first integral we say that it is (locally) integrable. In [4] the authors left open the following problem (see the last sentence of their paper):

Open Problem

We do not know if there are nonlinear integrable cases in systems (2).

Later on, the following appears explicitly in [5]:

Conjecture

The unique integrable case of the Liénard system (2) is the linear one.

We remark that this conjecture is made for Liénard analytic differential systems having a strong saddle at the origin.

The objective of this note is to prove the previous conjecture restricted to polynomial first integrals and to Liénard polynomial differential systems (2), i.e. when the function F(x) is a polynomial. Thus our first main result is:

Theorem 3

If a Liénard analytic differential system (2) has a local analytic first integral defined in a neighborhood of the origin, then

$$\begin{aligned} a=F'(0)=\pm \frac{k_1-k_2}{\sqrt{k_1k_2}}, \end{aligned}$$
(7)

where \(a\ne 0\) and \(k_1\) and \(k_2\) are coprime positive integers.

The proof of Theorem 3 is given in Sect. 2. Note that since we are interested in systems that are integrable, we must have a satisfying (7).

When \(a=F'(0)\) does not satisfy (7) the analytical integrability of the Liénard analytic differential system has been studied in [9].

Theorem 4

If a Liénard analytic differential system (2) with a as in (7) has a polynomial first integral, then the degree of the polynomial F(x) must be one, i.e. \(F(x)=a x\), and the polynomial first integral H is

$$\begin{aligned} H= {\left\{ \begin{array}{ll} (\sqrt{k_2} x -\sqrt{k_1} y)^{k_1} (\sqrt{k_1} x +\sqrt{k_2} y)^{k_2} &{} \text {if }a =(k_1-k_2)/\sqrt{k_1k_2}, \\ (\sqrt{k_2} x +\sqrt{k_1} y)^{k_1} (\sqrt{k_1} x -\sqrt{k_2} y)^{k_2} &{} \text {if }a =(k_2-k_1)/\sqrt{k_1k_2}. \end{array}\right. } \end{aligned}$$

Note that Theorem 4 proves the conjecture restricted to polynomial first integrals.

The next result proves the conjecture.

Theorem 5

If a Liénard analytic differential system (2) with a as in (7) has an analytic first integral defined in a neighborhood of the origin, then the degree of the polynomial F(x) must be one, i.e. \(F(x)=a x\), and the polynomial first integral H is the one given in Theorem 4.

Theorem 4 is proved in Sect. 3, while Theorem 5 is proved in Sect. 4.

2 Proof of Theorem 3

Before proving Theorem 3 we recall the following result whose proof can be found in [3, 6, 10, 11].

Theorem 6

If the eigenvalues \(\lambda _1\) and \(\lambda _2\) of the Jacobian matrix of system (2) at the singular point (0, 0) do not satisfy any condition of the form

$$\begin{aligned} k_1 \lambda _1 + k_2 \lambda _2=0, \quad k_1,k_2 \in \mathbb {Z}^+, \quad k_1 +k_2 >0, \end{aligned}$$

then system (2) has no local analytic first integral defined in a neighborhood of the origin.

We first note that the origin is the unique singular point of system (2) and that the eigenvalues of the Jacobian matrix at this point satisfy

$$\begin{aligned} \lambda ^2 -a \lambda -1=0, \ \text {that is} \ \lambda _1= \frac{a + \sqrt{a^2+4}}{2}>0, \quad \lambda _2= \frac{a - \sqrt{a^2+4}}{2}<0. \end{aligned}$$

So we have that \(\lambda _1\lambda _2=-1\), yielding \(\lambda _2=-1/\lambda _1\). Moreover, since by assumption the system has a local analytic first integral in a neighborhood of the origin, in view of Theorem 6 we must have that

$$\begin{aligned} 0=k_1 \lambda _1 + k_2 \lambda _2= k_1 \lambda _1 -\frac{k_2}{\lambda _1} =\frac{k_1 \lambda _1^2 -k_2}{\lambda _1}, \end{aligned}$$

with \(k_1,k_2\in \mathbb {Z}^+\) such that \(k_1+k_2 > 0\). So,

$$\begin{aligned} \lambda _1 =\sqrt{\frac{k_2}{k_1}}, \quad \lambda _2=-\sqrt{\frac{k_1}{k_2}}. \end{aligned}$$

Note that \(k_1 ,k_2 \in \mathbb {Z}^+ \setminus \{0\}\) because \(\lambda _1\) and \(\lambda _2\) are not zero. Therefore

$$\begin{aligned} \frac{a + \sqrt{a^2+4}}{2}= \sqrt{\frac{k_2}{k_1}}, \quad \frac{a - \sqrt{a^2+4}}{2}=-\sqrt{\frac{k_1}{k_2}}, \end{aligned}$$

or equivalently

$$\begin{aligned} \frac{a^2 +a^2+4 + 2 a \sqrt{a^2+4}}{4}=\frac{k_2}{k_1}, \quad \frac{a^2 +a^2+4 - 2 a \sqrt{a^2+4}}{4}=\frac{k_1}{k_2}. \end{aligned}$$

Hence,

$$\begin{aligned} \frac{k_1}{k_2} = \frac{k_2}{k_1} -a \sqrt{a^2+4}, \end{aligned}$$

that is

$$\begin{aligned} a \sqrt{a^2+4} = \frac{k_2^2-k_1^2}{k_1 k_2}. \end{aligned}$$

Hence

$$\begin{aligned} a=\pm \frac{k_1-k_2}{\sqrt{k_1k_2}}, \quad k_1,k_2 \in \mathbb {Z}^+\setminus \{0\}. \end{aligned}$$

Moreover, \(k_1\) and \(k_2\) are different, since otherwise \(a=0\), which is not possible because the origin would be a weak saddle. Finally, we observe that \(k_1\) and \(k_2\) can be taken coprime. Otherwise, setting \(k_1=\gcd (k_1,k_2)\, \hat{k}_1\) and \(k_2=\gcd (k_1,k_2)\, \hat{k}_2\) we get

$$\begin{aligned} a=\pm \frac{\gcd (k_1,k_2)(\hat{k}_1-\hat{k}_2)}{\sqrt{(\gcd (k_1,k_2))^2 \hat{k}_1 \hat{k}_2}}=\pm \frac{\hat{k}_1-\hat{k}_2}{\sqrt{ \hat{k}_1 \hat{k}_2}}. \end{aligned}$$

This completes the proof of Theorem 3.
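The computation above can be retraced numerically. In the sketch below (an illustration only; the coprime pairs \((k_1,k_2)\) are sample choices) we take the sign branch \(a=(k_2-k_1)/\sqrt{k_1k_2}\) of (7) and check that the eigenvalues are \(\sqrt{k_2/k_1}\) and \(-\sqrt{k_1/k_2}\) and satisfy the resonance \(k_1\lambda _1+k_2\lambda _2=0\):

```python
import math
from math import gcd

# Consistency check of (7), following the derivation in Sect. 2.
for (k1, k2) in [(2, 1), (3, 2), (5, 3), (7, 4)]:
    assert gcd(k1, k2) == 1
    a = (k2 - k1) / math.sqrt(k1 * k2)     # one sign branch of (7)
    s = math.sqrt(a * a + 4)
    l1, l2 = (a + s) / 2, (a - s) / 2      # eigenvalues of the linear part
    assert abs(l1 - math.sqrt(k2 / k1)) < 1e-12
    assert abs(l2 + math.sqrt(k1 / k2)) < 1e-12
    assert abs(k1 * l1 + k2 * l2) < 1e-9   # resonance of Theorem 6
```

The opposite sign branch exchanges the roles of \(k_1\) and \(k_2\).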

3 Proof of Theorem 4

Without loss of generality we may write the polynomial first integral \(H=H(x,y)\) as

$$\begin{aligned} H=g_0(x)y^n+g_1(x)y^{n-1}+\cdots +g_{n-1}(x)y+g_n(x), \end{aligned}$$

where the \(g_i(x)\) for \(i=0,\ldots ,n\) are polynomials, and \(g_0(x)\) is not the zero polynomial. Substituting H into (3) we get

$$\begin{aligned} \mathcal {X}H= & {} (y+F(x))\Big (g_0'y^n+g_1'y^{n-1}+\cdots + g_{n-1}'y+ g_n'\Big )\\&\quad + x\Big (n g_0 y^{n-1}+(n-1)g_1y^{n-2}+\cdots +2g_{n-2}y+ g_{n-1}\Big )=0, \end{aligned}$$

where the prime denotes derivative with respect to the variable x. Now we rewrite this equality as

$$\begin{aligned}&g_0'y^{n+1}+(g_0'F+g_1')y^n+(g_1'F+g_2'+n g_0 x)y^{n-1}+ \cdots \\&\quad +(g_{n-1}'+g_{n-2}'F+3g_{n-3}x)y^2 \\&\quad +(g_n'+g_{n-1}'F+2g_{n-2}x)y+(g_n'F+g_{n-1}x)=0. \end{aligned}$$

Since all coefficients of the previous polynomial in the variable y must be zero, we get the following system of differential equations

$$\begin{aligned} \begin{array}{cc} g'_0=0, &{} g'_1=0,\\ g_0=-\displaystyle {\frac{g'_2}{nx}}, &{} g_1=\displaystyle {\frac{-Fg'_2-g'_3}{(n-1)x}},\\ g_2=\displaystyle {\frac{-Fg'_3-g'_4}{(n-2)x}}, &{} g_3=\displaystyle {\frac{-Fg'_4-g'_5}{(n-3)x}},\\ \vdots &{} \vdots \\ g_{n-2}=\displaystyle {\frac{-Fg'_{n-1}-g'_n}{2x}}, &{} g_{n-1}=\displaystyle {\frac{-Fg'_n}{x}}. \end{array} \end{aligned}$$
(8)

From the first two equations of (8) we get that \(g_0\) and \(g_1\) are constants, and additionally by assumption \(g_0\ne 0\). From the third equation of (8), since \(g_0\) is a non-zero constant, we obtain \(g_2'=-n g_0 x\), so \(g_2(x)\) is a polynomial of degree 2.

From the fourth equation, since \(g_1,g_2\) and \(g_3\) are polynomials, we get that F must be a polynomial. Assume that the degree of the polynomial F is \(d\ge 1\). Then from the fourth equation of (8) it follows that the degree of the polynomial \(g_3\) is \(d+2\). Now from the fifth equation of (8) we get that the degree of the polynomial \(g_4\) is \(2d+2\), and from the sixth we obtain that the degree of the polynomial \(g_5\) is \(3d+2\).

Thus recursively we have that the degree of the polynomial \(g_k\) for \(k=2,\ldots ,n\) is \((k-2)d+2\). From the last equation of (8) we obtain that the degree \(1+(n-3)d+2\) of the polynomial \(xg_{n-1}\) must be equal to the degree \(d+(n-2)d+1\) of the polynomial \(Fg'_n\), and this equality reduces to \(2d=2\), which is only possible if \(d=1\).
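The degree recursion can be tested by direct polynomial manipulation. The following sketch (our own check; the values of n, d, the coefficients of F and the constants \(g_0,g_1\) are arbitrary sample choices) builds \(g_2,\ldots ,g_n\) from the recurrence \(g_k'=-(n-k+2)\,x\,g_{k-2}-F g_{k-1}'\) read off from (8), and verifies \(\deg g_k=(k-2)d+2\):

```python
# Polynomials are coefficient lists, index = power of x.
def pmul(p, q):
    r = [0.0] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            r[i + j] += a * b
    return r

def pint(p):                  # antiderivative, zero constant of integration
    return [0.0] + [c / (i + 1) for i, c in enumerate(p)]

def padd(p, q):
    r = [0.0] * max(len(p), len(q))
    for i, c in enumerate(p):
        r[i] += c
    for i, c in enumerate(q):
        r[i] += c
    return r

def deg(p):
    return max(i for i, c in enumerate(p) if abs(c) > 1e-9)

n, d = 7, 3                                  # sample values
F = [0.0, 0.0] + [1.0] * (d - 1)             # F = x^2 + x^3, deg F = d
x = [0.0, 1.0]
g = {0: [2.0], 1: [1.0]}                     # g0, g1: non-zero constants
dg = {0: [0.0], 1: [0.0]}                    # their derivatives
for k in range(2, n + 1):
    # from (8): g_k' = -(n-k+2) x g_{k-2} - F g_{k-1}'
    dgk = padd(pmul([-(n - k + 2)], pmul(x, g[k - 2])),
               pmul([-1.0], pmul(F, dg[k - 1])))
    g[k], dg[k] = pint(dgk), dgk

for k in range(2, n + 1):
    assert deg(g[k]) == (k - 2) * d + 2
```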

It is easy to check that the Liénard analytic differential system (2) of degree 1, i.e.

$$\begin{aligned} \dot{x}= y+ax,\qquad \dot{y}=x, \end{aligned}$$

with a as in (7) has the polynomial first integral H as in the statement of the theorem. This completes the proof of Theorem 4.
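That H is indeed a first integral can be verified by computing \(\mathcal {X}H\) in closed form. The sketch below (with the sample pair \(k_1=3\), \(k_2=2\), our own choice) evaluates \(\mathcal {X}H\) at a few points for the branch \(a=(k_1-k_2)/\sqrt{k_1k_2}\):

```python
import math

# Check that H = (sqrt(k2) x - sqrt(k1) y)^k1 (sqrt(k1) x + sqrt(k2) y)^k2
# is a first integral of x' = y + a x, y' = x for a = (k1-k2)/sqrt(k1 k2).
k1, k2 = 3, 2
a = (k1 - k2) / math.sqrt(k1 * k2)
r1, r2 = math.sqrt(k1), math.sqrt(k2)

def XH(x, y):
    A = r2 * x - r1 * y
    B = r1 * x + r2 * y
    # partial derivatives of H = A^k1 * B^k2 by the product rule
    Hx = k1 * r2 * A**(k1 - 1) * B**k2 + k2 * r1 * A**k1 * B**(k2 - 1)
    Hy = -k1 * r1 * A**(k1 - 1) * B**k2 + k2 * r2 * A**k1 * B**(k2 - 1)
    return (y + a * x) * Hx + x * Hy       # should vanish identically

for (x, y) in [(0.4, -0.9), (1.3, 0.2), (-0.7, 0.5)]:
    assert abs(XH(x, y)) < 1e-9
```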

4 Proof of Theorem 5

Consider system (2) with one of the conditions given by Theorem 3, namely the coefficient a of x in F(x) is equal to \((k_1-k_2)/\sqrt{k_1k_2}\) (the case in which \(a=-(k_1-k_2)/\sqrt{k_1k_2}\) follows in the same way). Then

$$\begin{aligned} F(x)=\frac{k_1-k_2}{\sqrt{k_1k_2}}x+ \sum _{j=2}^{\infty }a_j x^j. \end{aligned}$$

If \(a_j=0\) for \(j \ge 2\), it follows from Theorem 4 that system (2) has a polynomial first integral. Therefore we assume, first, that \(a_j \ne 0\) for some \(j \ge 2\), and second, that system (2) has an analytic first integral H defined in a neighborhood of the origin; we will reach a contradiction.

Under the assumptions on F we have

$$\begin{aligned} \begin{aligned} \dot{x}&= y + \frac{k_1-k_2}{\sqrt{k_1k_2}} x + \sum _{j=2}^\infty a_j x^{j}, \\ \dot{y}&=x. \end{aligned} \end{aligned}$$
(9)

Making the change of variables

$$\begin{aligned} u=\sqrt{k_1}\, x + \sqrt{k_2}\, y, \quad v=\sqrt{k_2}\, x -\sqrt{k_1}\, y, \end{aligned}$$
(10)

with inverse change

$$\begin{aligned} x=\frac{\sqrt{k_1}\, u +\sqrt{k_2}\, v}{k_1+k_2}, \quad y= \frac{\sqrt{k_2}\, u -\sqrt{k_1}\, v}{k_1+k_2} \end{aligned}$$

and the rescaling of the time \(t=\sqrt{k_1k_2}\, T\), we have that system (9) becomes

$$\begin{aligned} \begin{aligned} u'&= k_1 u +k_1 \sqrt{k_2} \sum _{j=2}^\infty a_j \Big (\frac{\sqrt{k_1}\, u +\sqrt{k_2}\, v}{k_1+k_2}\Big )^j, \\ v'&= -k_2 v +k_2 \sqrt{k_1} \sum _{j=2}^\infty a_j \Big (\frac{\sqrt{k_1}\, u +\sqrt{k_2}\, v}{k_1+k_2}\Big )^j, \end{aligned} \end{aligned}$$
(11)

where the prime denotes derivative with respect to the new variable T.
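The passage from (9) to (11) can be checked numerically through the chain rule. In the following sketch, \(k_1=3\), \(k_2=2\) and the coefficients \(a_2,a_3\) (F truncated at degree 3) are sample choices of ours:

```python
import math

# Chain-rule check of the change of variables (10) plus the time rescaling
# t = sqrt(k1 k2) T, which should transform (9) into (11).
k1, k2 = 3, 2
aj = {2: 0.5, 3: -0.25}                      # sample coefficients a_j
r1, r2, s = math.sqrt(k1), math.sqrt(k2), math.sqrt(k1 * k2)

for (x, y) in [(0.3, -0.2), (-0.1, 0.4)]:
    S = sum(c * x**j for j, c in aj.items())
    xdot = y + (k1 - k2) / s * x + S         # system (9)
    ydot = x
    u, v = r1 * x + r2 * y, r2 * x - r1 * y  # change (10)
    up = s * (r1 * xdot + r2 * ydot)         # u' = du/dT by the chain rule
    vp = s * (r2 * xdot - r1 * ydot)
    w = (r1 * u + r2 * v) / (k1 + k2)        # equals x, by the inverse change
    Sw = sum(c * w**j for j, c in aj.items())
    assert abs(up - (k1 * u + k1 * r2 * Sw)) < 1e-9    # first eq. of (11)
    assert abs(vp - (-k2 * v + k2 * r1 * Sw)) < 1e-9   # second eq. of (11)
```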

If \(k_1 > k_2\) (and so \(k_1 > 1\)), we change from the variables (u, v) to the variables (u, z), where \(z= u^{k_2} v^{k_1}\) and so \(v=z^{1/k_1} u^{-k_2/k_1}\).

If \(k_2 > k_1\) (and so \(k_2 > 1\)), we change from the variables (u, v) to (z, v), where \(z= u^{k_2} v^{k_1}\) and so \(u=z^{1/k_2} v^{-k_1/k_2}\).

From now on we assume that \(k_1 > k_2\) because the other case is done in a similar manner. Hence we take

$$\begin{aligned} z= u^{k_2} v^{k_1} \quad \text {that is} \quad v=z^{1/k_1} u^{-k_2/k_1}. \end{aligned}$$
(12)

Then from (11) we have

$$\begin{aligned} \begin{aligned} u'&= k_1 u +k_1 \sqrt{k_2} \sum _{j=2}^\infty a_j \Big (\frac{\sqrt{k_1} u +\sqrt{k_2}z^{1/k_1} u^{-k_2/k_1} }{k_1+k_2}\Big )^j, \\ z'&= k_1 k_2 u^{(k_2-k_1)/k_1} z^{(k_1-1)/k_1} (\sqrt{k_1} u+\sqrt{k_2} z^{1/k_1}u^{-k_2/k_1} ) \\&\phantom {\le } \cdot \sum _{j=2}^\infty a_j \Big (\frac{\sqrt{k_1} u +\sqrt{k_2}z^{1/k_1} u^{-k_2/k_1} }{k_1+k_2}\Big )^j. \end{aligned} \end{aligned}$$
(13)
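Similarly, applying the chain rule \(z'=k_2u^{k_2-1}v^{k_1}u'+k_1u^{k_2}v^{k_1-1}v'\) to (11) can be compared with the right-hand side of (13). A sketch with sample values (\(k_1=3\), \(k_2=2\) and the coefficients \(a_j\) are our own choices; u, v > 0 so that the fractional powers are well defined):

```python
import math

# With z = u^k2 v^k1, the z' obtained from (11) by the chain rule should
# equal the right-hand side of (13), written in the variables (u, z).
k1, k2 = 3, 2
aj = {2: 0.5, 3: -0.25}                      # sample coefficients a_j
r1, r2 = math.sqrt(k1), math.sqrt(k2)

for (u, v) in [(0.6, 0.8), (1.2, 0.5)]:
    w = (r1 * u + r2 * v) / (k1 + k2)        # this is x in the old variables
    S = sum(c * w**j for j, c in aj.items())
    up = k1 * u + k1 * r2 * S                # system (11)
    vp = -k2 * v + k2 * r1 * S
    z = u**k2 * v**k1
    zp_chain = k2 * u**(k2 - 1) * v**k1 * up + k1 * u**k2 * v**(k1 - 1) * vp
    # right-hand side of (13), with v replaced by z^{1/k1} u^{-k2/k1}
    vv = z**(1 / k1) * u**(-k2 / k1)
    rhs = (k1 * k2 * u**((k2 - k1) / k1) * z**((k1 - 1) / k1)
           * (r1 * u + r2 * vv)
           * sum(c * ((r1 * u + r2 * vv) / (k1 + k2))**j
                 for j, c in aj.items()))
    assert abs(zp_chain - rhs) < 1e-9
```

Note that the linear parts of (11) cancel in \(z'\), as they must, since \(u^{k_2}v^{k_1}\) is a first integral of the linearization.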

We write \(H(x,y)\) for a formal first integral of system (9). Then \(\hat{H}(u,v)=H(x,y)\) is a formal first integral of system (11) and \(\tilde{H}(u,z)=\hat{H}(u,v)\) is a formal first integral of system (13). Writing \(\hat{H}(u,v)=\sum _{j \ge 0} H_j(u) v^j\) with \(H_j\) a formal series in u, we can write \(\tilde{H}(u,z)\) as

$$\begin{aligned} \tilde{H}=\tilde{H}(u,z)=\sum _{j \ge 0} \tilde{H}_j(u) z^{j/k_1}, \end{aligned}$$

where \(\tilde{H}_j(u)=H_j(u) u^{-j k_2/k_1}\). Since \(\tilde{H}\) is a first integral we can assume that it has no constant term. Note that \(\tilde{H}\) satisfies

$$\begin{aligned} u' \frac{\partial \tilde{H}}{\partial u} + z' \frac{\partial \tilde{H}}{\partial z}=0, \end{aligned}$$
(14)

with \((u',z')\) as in (13). We will show by induction that

$$\begin{aligned} \tilde{H}_j(u)=0 \quad \text {for }j \ge 0. \end{aligned}$$
(15)

Note that to conclude the proof of the theorem it is enough to show that (15) holds, because then \(\tilde{H}\), and hence H, vanishes identically, in contradiction with H being a first integral.

First note that Eq. (14) restricted to \(z=0\) becomes

$$\begin{aligned} \bigg (k_1 u +k_1 \sqrt{k_2} \sum _{j=2}^\infty a_j \Big (\frac{\sqrt{k_1} u}{k_1 +k_2} \Big )^j \bigg ) \tilde{H}_0'(u)=0, \end{aligned}$$

where the prime denotes derivative with respect to the variable u. Thus \(\tilde{H}_0\) is a constant. Since \(\tilde{H}\) has no constant term we get \(\tilde{H}_0=0\). This proves (15) for \(j=0\).

We assume that (15) is satisfied for \(j=0,\ldots ,n-1\) with \(n\ge 1\) and we shall prove it for \(j=n\). By the induction hypothesis we have that

$$\begin{aligned} \tilde{H} =\sum _{j \ge 0} \tilde{H}_{j+n} (u) z^{(j+n)/k_1} =z^{n/k_1} g(u,z), \end{aligned}$$

with \(g(u,0)=\tilde{H}_n(u)\). Now, dividing Eq. (14) by \(z^{n/k_1}\) and then restricting to \(z=0\), Eq. (14) becomes

$$\begin{aligned} \bigg (k_1 u +k_1 \sqrt{k_2} \sum _{j=2}^\infty a_j \Big (\frac{\sqrt{k_1} u}{k_1 +k_2} \Big )^j \bigg ) \tilde{H}_n'(u)=0. \end{aligned}$$

Therefore \(\tilde{H}_n(u)=0\). This proves (15) for \(j=n\). In short, the theorem is proved.