1 Introduction and Statement of the Main Results

Piecewise smooth differential systems appear, for example, in control theory, in electronic circuits with switches, and in mechanical engineering with impacts and dry friction, see e.g. [3, 8, 10, 14, 15, 23] and the references therein. Their dynamics has become an important subject of study, attracting many mathematicians, physicists and engineers. In the past three decades the theory of piecewise smooth differential systems has been greatly developed, and many nice results have appeared, see e.g. [5, 7, 13, 16, 21–25] and the references therein.

At present there are many tools and theories for investigating the dynamics of piecewise smooth differential systems, among which the averaging theory is a newly developed one, whereas for smooth dynamical systems this theory is classical and powerful. The first formalization of the averaging theory for smooth differential systems was provided by Fatou [9] in 1928. Practical applications were made by Krylov and Bogoliubov [2] in 1930, and by Bogoliubov [1] in 1945, to study stationary oscillations of nonlinear mechanical systems, among other problems. In recent years the averaging theory for smooth differential systems has been greatly developed, and plenty of excellent results have appeared. Buică and Llibre [4] developed the averaging theory to study periodic orbits of Lipschitz continuous differential systems via the theory of Brouwer degree. Buică et al. [6] improved their theory to the case where the unperturbed system has a lower dimensional submanifold filled with periodic orbits. Giné et al. [11] presented an averaging theory of arbitrary order for one dimensional perturbed analytic differential equations. Llibre et al. [19] generalized the results in [11] to finite dimensional smooth differential systems whose unperturbed systems have periodic orbits filling a region. In the case where the unperturbed differential systems have periodic orbits filling only a lower dimensional submanifold, the averaging theory of arbitrary order was established only recently in [12].

In contrast, the averaging theory of piecewise smooth differential systems has been developed only in recent years. Llibre et al. [20] developed a variation of the classical averaging method for detecting limit cycles of certain piecewise continuous dynamical systems. Llibre et al. [17] further considered the averaging theory of first and second order for studying periodic solutions of finite dimensional piecewise smooth differential systems. We note that the averaging theories in [17] can deal only with piecewise smooth differential systems whose unperturbed systems are smooth. Llibre and Novaes [18] recently obtained an averaging theory of first order for studying periodic orbits of piecewise smooth differential systems whose unperturbed systems have periodic orbits filling a lower dimensional submanifold. This result generalizes that in [6] from smooth differential systems to discontinuous piecewise smooth differential systems. The authors also provided an application of their theory to piecewise linear differential systems whose unperturbed systems are smooth. As far as we know, all examples in [17, 18, 20] showing applications of the averaging theories for piecewise smooth differential systems have smooth unperturbed differential systems.

In this paper we first establish an averaging theory of arbitrary order for studying periodic orbits of one dimensional piecewise smooth periodic differential equations. We will see that the averaging functions are more complicated than in the smooth case. The reason is that the variational equations of the unperturbed systems along their periodic orbits are piecewise smooth, so their solutions have more complicated expressions than in the smooth case. Second, we use the developed averaging theory to study planar piecewise smooth autonomous differential systems whose unperturbed systems have a family of periodic orbits and whose two zones of definition are separated by a straight line. To do so, we use a generalized polar change of variables to transform the planar piecewise smooth differential systems into one dimensional piecewise smooth periodic differential equations, and then we compute the exact expressions of the averaging functions. Finally we apply our averaging theory to study limit cycle bifurcation of planar piecewise polynomial differential systems with a nondegenerate \(\Sigma \)-center, which is a normal form of a family of piecewise differential systems with a \(\Sigma \)-center (posed by Buzzi et al. [5]). We prove that for any \(n\in \mathbb N\) there exists a piecewise polynomial perturbation of degree n which can produce n limit cycles via the first order averaging method, and that piecewise linear perturbations of the nondegenerate \(\Sigma \)-center can produce 2 limit cycles via the second order averaging method.

Consider a piecewise analytic and periodic differential equation

$$\begin{aligned} \frac{dx}{dt}=\sum \limits _{k\ge 0}\varepsilon ^k F_k(t,x)=: F(t,x,\varepsilon ), \quad (t,x)\in \mathbb S^1\times D, \end{aligned}$$
(1)

with \(\mathbb S^1=\mathbb R /(T\mathbb Z)\), \(D\subset \mathbb R\) an open interval, and

$$\begin{aligned} F_k(t,x)={\left\{ \begin{array}{ll} F_k^1(t,x), &{}\quad t\in [0,T_1],\\ F_k^2(t,x), &{}\quad t\in [T_1,T], \end{array}\right. } \end{aligned}$$

where \(T_1\in [0, T)\), \(F^i_k:\mathbb {S}^1\times D\longrightarrow \mathbb {R}\), \(i=1,2\), are analytic functions, and are periodic of period T in t, and \(\varepsilon \in (-\varepsilon _0,\varepsilon _0)\) with \(\varepsilon _0\) a small positive real number. Of course, if \(T_1=0\) then Eq. (1) is smooth. Here ‘piecewise analytic’ means that \(F(t,x,\varepsilon )\) are analytic in both of the regions \([0,T_1]\times D\times (-\varepsilon _0,\varepsilon _0)\) and \([T_1,T]\times D\times (-\varepsilon _0,\varepsilon _0)\).

Denote by \(x(t;z,\varepsilon )\) the solution of system (1) with the initial condition \(x(0)=z\). In what follows we also use the notation \(x(t;T_1,z,\varepsilon )\) to denote the solution of system (1) satisfying the initial condition \(x(T_1)=z\).

Assume that the unperturbed equation of Eq. (1), i.e.

$$\begin{aligned} \frac{dx}{dt}=F_0(t,x), \end{aligned}$$
(2)

has a family of periodic orbits of period T, which fill a region of \(\mathbb {S}^1\times D\). Let \(x_0(t;z)\) be a solution of the unperturbed system (2) satisfying \(x_0(0)=z\). Note that the solution \(x_0(t;z)\) of Eq. (2) can be seen as a composition of the solution \(x_0^1(t;z)\) of the initial value problem

$$\begin{aligned} \frac{dx}{dt}=F_0^1(t,x), \qquad x(0)=z, \end{aligned}$$
(3)

when \(t\in [0,T_1]\), and of the solution \(x_0^2(t;T_1,w)\) of the initial value problem

$$\begin{aligned} \frac{dx}{dt}=F_0^2(t,x), \qquad x(T_1)=w:=x_0^1(T_1,z), \end{aligned}$$

when \(t\in [T_1,T]\). That is,

$$\begin{aligned} x_0(t,z)=\left\{ \begin{array}{ll} x_0^1(t;z),\quad &{}\quad t\in [0,T_1],\\ x_0^2(t;T_1, x_0^1(T_1,z))=x_0^2(t;T_1,w), \quad &{} \quad t\in [T_1,T]. \end{array}\right. \end{aligned}$$

So \(x_0(t;z)\) is analytic in z, because the composition of two analytic functions is analytic. Since the solutions \(x_0(t;z)\), \(z\in D\), form a family of periodic orbits, the solution \(x_0(t;z)\) is monotonic in the initial value z. So one has

$$\begin{aligned} \frac{\partial x_0(t;z)}{\partial z}\ne 0, \quad \text{for } t\in [0,T], \text{ provided that } x_0(t;z) \text{ is not a constant solution of } (2). \end{aligned}$$

In what follows, for simplicity we use \(x_0^2(t,z)\) to represent \(x_0^2(t;T_1,x_0^1(T_1,z))\).
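For readers who prefer to see the composition of the two pieces concretely, the following minimal Python sketch integrates a toy piecewise equation of this form numerically; the right-hand sides \(F_0^1=-x/2\) on \([0,T_1]\) and \(F_0^2=x/2\) on \([T_1,T]\), with \(T_1=\pi \) and \(T=2\pi \), are an illustrative choice of our own, not one of the systems studied below.

```python
# A minimal numerical sketch (toy right-hand sides, chosen only for illustration):
# compose the solution of a piecewise equation as in (3): integrate F_0^1 on [0, T_1]
# with x(0) = z, then integrate F_0^2 on [T_1, T] starting from x_0^1(T_1; z).
import numpy as np
from scipy.integrate import solve_ivp

T1, T = np.pi, 2 * np.pi

def F01(t, x):          # hypothetical right-hand side on [0, T_1]
    return [-0.5 * x[0]]

def F02(t, x):          # hypothetical right-hand side on [T_1, T]
    return [0.5 * x[0]]

def x0(t, z):
    """Solution x_0(t; z) of the toy unperturbed piecewise equation."""
    sol1 = solve_ivp(F01, (0.0, T1), [z], dense_output=True, rtol=1e-10)
    if t <= T1:
        return sol1.sol(t)[0]
    w = sol1.y[0, -1]                      # w = x_0^1(T_1; z)
    sol2 = solve_ivp(F02, (T1, T), [w], dense_output=True, rtol=1e-10)
    return sol2.sol(t)[0]

z = 1.3
print(x0(T, z))   # exp(-pi/2)*exp(pi/2)*z = z: every orbit of this toy equation is T-periodic
```

For this toy choice every solution is T-periodic, which is exactly the situation assumed for Eq. (2).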

Due to the piecewise analyticity of system (1), by arguments similar to those for \(x_0(t;z)\) we get that the solution \(x(t;z,\varepsilon )\) of system (1) satisfying \(x(0)=z\) is analytic in z and \(\varepsilon \). Hence it can be written as

$$\begin{aligned} x(t;z,\varepsilon )=\sum \limits _{n\ge 0}x_n(t,z)\varepsilon ^n. \end{aligned}$$
(4)

The following result provides exact expressions of the functions \(x_n(t,z)\) in (4).

Theorem 1

Assume that \(x_0(t;z)\) is the solution of Eq. (2) satisfying the initial condition \(x_0(0)=z\). Then the solution (4) of Eq. (1) with the initial condition \(x(0)=z\) has the expressions

$$\begin{aligned} x_1(t,z)&=\frac{\partial x_0(t;z)}{\partial z}\int \limits _0^{t}\frac{F_1(s,x_0(s;z))}{\frac{\partial x_0(s;z)}{\partial z}}ds,\\ x_k(t,z)&=\frac{\partial x_0(t;z)}{\partial z}\int \limits _0^{t}\frac{R_k(s,z)}{\frac{\partial x_0(s;z)}{\partial z}}ds,\quad k\ge 2, \end{aligned}$$

where

$$\begin{aligned}&R_k(t,z)=F_k(t,x_0(t;z))+\sum \limits _{l=0}^{k-2}\sum \limits _{j=1}^{k-l}\left( \frac{1}{j!}\frac{\partial ^{j}F_{k-l-j}}{\partial x^j}(t,x_0(t;z))\right. \\&\quad \qquad \qquad \qquad \qquad \qquad \left. \times \sum \limits _{m_1+\cdots +m_j=l+j}x_{m_1}(t,z)\times \cdots \times x_{m_j}(t,z)\right) , \end{aligned}$$

is analytic in both of the regions \([0,T_1]\times D\) and \([T_1,T]\times D\), and \(m_i\ge 1\) is an integer for \(i=1,2,\cdots ,j\).

The proof of Theorem 1 will be given in Sect. 2. As we will see, the treatments used for smooth differential systems do not work well here, because the variational equations of the unperturbed systems along the given periodic solutions are now piecewise smooth. We will carefully compute the solutions of the piecewise linear variational equations and use them to express the averaging functions of arbitrary order; for more details see the proof of Theorem 1, and especially that of Lemma 6.

We remark that Theorem 1 also holds when system (1) is piecewise \(C^\infty \) instead of piecewise analytic, provided we study averaging functions of any fixed finite order. The proof follows from the same arguments as those of Theorem 1 by taking a sufficiently high order truncation of (4).

We define the averaging functions \(h_1, h_k: D\rightarrow \mathbb R\), \(k=2,3,\ldots \), as

$$\begin{aligned} h_1(z)&=\int \limits _0^{T}\frac{F_1(s,x_0(s;z))}{\frac{\partial x_0(s;z)}{\partial z}}ds, \end{aligned}$$
(5)
$$\begin{aligned} h_k(z)&=\displaystyle \int \limits _0^{T}\frac{R_k(s,z)}{\frac{\partial x_0(s;z)}{\partial z}}ds,\quad k=2,3,\ldots \end{aligned}$$
(6)

Recall that \(\frac{\partial x_0(s;z)}{\partial z}\ne 0\). So the \(h_k(z)\)'s are well defined for all \(k\in \mathbb N\). For later applications we present the precise expression of \(h_2\):

$$\begin{aligned} h_2(z)=\int \limits _0^{T}\frac{1}{\frac{\partial x_0(s;z)}{\partial z}}\left( F_2(s,x_0(s;z))+\frac{\partial F_1}{\partial x}(s,x_0(s;z))\,x_1(s,z)+\frac{1}{2}\frac{\partial ^2F_0}{\partial x^2}(s,x_0(s;z))\,x_1(s,z)^2\right) ds. \end{aligned}$$
(7)

Now we can state the arbitrary order averaging theory for piecewise smooth periodic differential equations (1).

Theorem 2

Assume that the solution \(x_0(t;z)\) of Eq. (2) satisfying the initial condition \(x_0(0)=z\) is T–periodic for all \(z\in D\). The following statements hold.

(a):

If \(h_1(z)\) is not identically zero, then for each simple root \(z^*\) of \(h_1(z)=0\) and \(|\varepsilon |\ne 0\) sufficiently small, there exists a T–periodic solution \(x(t;\phi (\varepsilon ),\varepsilon )\) of Eq. (1) such that \(x(0;\phi (\varepsilon ),\varepsilon )\rightarrow z^*\) as \(\varepsilon \rightarrow 0\), where \(\phi \) is an analytic function which satisfies \(\phi (0)=z^*\).

(b):

Assume \(h_1(z),\cdots ,h_{n-1}(z)\) are identically zero in D. If \(h_n(z)\) is not identically zero, then for each simple root \(z^*\) of \(h_n(z)=0\) and \(|\varepsilon |\ne 0\) sufficiently small, there exists a T–periodic solution \(x(t;\phi (\varepsilon ),\varepsilon )\) of Eq. (1) such that \(x(0;\phi (\varepsilon ),\varepsilon )\rightarrow z^*\) as \(\varepsilon \rightarrow 0\), where \(\phi \) is an analytic function which satisfies \(\phi (0)=z^*\).

The proof of Theorem 2 will be given in Sect. 2.
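The content of Theorem 2 can be tested numerically on the toy piecewise equation used above. The sketch below compares the first order averaging function \(h_1(z)\) of (5), computed by quadrature, with the scaled displacement \((x(T;z,\varepsilon )-z)/\varepsilon \) of the perturbed equation; the perturbation \(F_1(t,x)=\sin t+0.1x^2\) is a hypothetical choice made only for illustration.

```python
# A minimal numerical sketch (toy data, assumptions as in the previous example):
# for dx/dt = F_0(t,x) + eps*F_1(t,x) with the piecewise F_0 of the toy example and
# a hypothetical F_1, compare h_1(z) of (5) with (x(T; z, eps) - z)/eps.
import numpy as np
from scipy.integrate import solve_ivp, quad

T1, T = np.pi, 2 * np.pi

def F1(t, x):
    return np.sin(t) + 0.1 * x ** 2                 # hypothetical perturbation

def x0(t, z):                                        # unperturbed solution x_0(t; z)
    return z * np.exp(-0.5 * t) if t <= T1 else z * np.exp(0.5 * (t - 2 * T1))

def dx0dz(t, z):                                     # d x_0 / d z
    return np.exp(-0.5 * t) if t <= T1 else np.exp(0.5 * (t - 2 * T1))

def h1(z):                                           # formula (5), split at t = T_1
    f = lambda s: F1(s, x0(s, z)) / dx0dz(s, z)
    return quad(f, 0, T1)[0] + quad(f, T1, T)[0]

def poincare(z, eps):                                # x(T; z, eps) - z
    rhs1 = lambda t, x: [-0.5 * x[0] + eps * F1(t, x[0])]
    rhs2 = lambda t, x: [0.5 * x[0] + eps * F1(t, x[0])]
    s1 = solve_ivp(rhs1, (0, T1), [z], rtol=1e-10, atol=1e-12)
    s2 = solve_ivp(rhs2, (T1, T), [s1.y[0, -1]], rtol=1e-10, atol=1e-12)
    return s2.y[0, -1] - z

z, eps = 1.3, 1e-4
print(h1(z), poincare(z, eps) / eps)   # the two numbers agree up to O(eps)
```

The agreement of the two printed values illustrates the identity \(h_1(z)=x_1(T,z)\) on which the proof of Theorem 2 is based.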

Next we will apply Theorems 1 and 2 to study limit cycle bifurcation of planar piecewise smooth differential systems. Consider the planar piecewise smooth differential system

$$\begin{aligned} \begin{aligned} \dot{x}=\sum \limits _{k=0}^{\infty }\varepsilon ^k p_k(x,y),\\ \dot{y}=\sum \limits _{k=0}^{\infty }\varepsilon ^k q_k(x,y), \end{aligned}\qquad (x,y)\in \Omega \subset \mathbb R^2, \end{aligned}$$
(8)

with \(\Omega \) a connected open subset and \(\Omega \cap \{y=0\}\ne \emptyset \), where

$$\begin{aligned} p_k(x,y)= & {} {\left\{ \begin{array}{ll} ~p^+_k(x,y),\quad \quad (x,y)\in \Omega ^+:=\Omega \cap \{y\ge 0\}, \\ ~p^-_k(x,y),\quad \quad (x,y)\in \Omega ^-:=\Omega \cap \{y\le 0\}, \end{array}\right. }\\ q_k(x,y)= & {} {\left\{ \begin{array}{ll} ~q^+_k(x,y),\quad \quad (x,y)\in \Omega ^+:=\Omega \cap \{y\ge 0\}, \\ ~q^-_k(x,y),\quad \quad (x,y)\in \Omega ^-:=\Omega \cap \{y\le 0\}. \end{array}\right. } \end{aligned}$$

We assume that the unperturbed system of (8), i.e.

$$\begin{aligned} \dot{x}=p_0(x,y), \qquad \dot{y}=q_0(x,y), \qquad (x,y)\in \Omega , \end{aligned}$$

has periodic orbits filling an open region around the origin. Here we assume that the origin is in the region \(\Omega \). We take the generalized polar change of variables

$$\begin{aligned} x=\xi _1^\pm (r)\eta _1^\pm (\theta ),\quad y=\xi _2^\pm (r)\eta _2^\pm (\theta ), \end{aligned}$$
(9)

in the upper and lower half planes respectively, to transform system (8) to a one dimensional piecewise smooth periodic differential equation of form (1), i.e.

$$\begin{aligned} \frac{dr}{d\theta }=\widetilde{F}(\theta ,r,\varepsilon ) ={\left\{ \begin{array}{ll} \sum \limits _{k\ge 0} \varepsilon ^k F^+_k(\theta ,r),\quad \theta \in [0,\pi ],\\ \sum \limits _{k\ge 0} \varepsilon ^k F^-_k(\theta ,r),\quad \theta \in [\pi ,2\pi ], \end{array}\right. } \end{aligned}$$
(10)

where \(\widetilde{F}(\theta ,r,\varepsilon )\) is \(2\pi \) periodic in \(\theta \), and \(F^+_k(\theta ,r)\) and \(F^-_k(\theta ,r)\) are analytic in the regions \([0,\pi ]\times J\) and \([\pi ,2\pi ]\times J\), respectively, with \(J=(0, \sigma )\) an open interval. The concrete expressions of \(F^\pm _k(\theta ,r)\) will be given in (29) for \(k=0,1,2\). We remark that the concrete form of the transformation (9) depends on the specific differential system.

Denote by \(r^+_0(\theta ,z)\) the solution of Eq. (10) with the initial condition \(r^+_0(0,z)=z\) in the upper half plane, and by \(r^-_0(\theta ,z)\) the solution of differential equation (10) with the initial condition \(r^-_0(\pi ,z)=r^+_0(\pi ,z)\) in the lower half plane. In order to simplify notations, we set

$$\begin{aligned}&r^\pm _0:=r^\pm _0(\theta ,z),\\&\widetilde{p}^\pm _{i}:=p^\pm _{i}\left( \xi ^\pm _{1}(r^\pm _0(\theta ,z))\eta ^\pm _1(\theta ),\xi ^\pm _{2}(r^\pm _0(\theta ,z))\eta ^\pm _2(\theta )\right) ,\\&\widetilde{q}^\pm _{i}:=q^\pm _{i}\left( \xi ^\pm _{1}(r^\pm _0(\theta ,z))\eta ^\pm _1(\theta ),\xi ^\pm _{2}(r^\pm _0(\theta ,z))\eta ^\pm _2(\theta )\right) ,\ \ \ i=0,1,2,\ldots \end{aligned}$$

Applying Theorem 1 to Eq. (10), we can obtain the averaging functions associated to system (8). Since their expressions are very complicated, we only present the first order averaging function here.

Theorem 3

For the planar piecewise analytic system (8), if its unperturbed system has a family of periodic orbits filling a period annulus, which can be parameterized by the generalized polar coordinates (9), then the first order averaging function has the expression

$$\begin{aligned} H_1(z)=&\int _0^\pi \frac{-\widetilde{p}_1^+ \widetilde{q}_0^++\widetilde{p}_0^+ \widetilde{q}_1^+}{\frac{\partial r_0^+(\theta ,z)}{\partial z}\Big (-(\xi _2^+)'(r_0^+)\eta _2^+ \widetilde{p}_0^++(\xi _1^+)'(r_0^+) \eta _1^+ \widetilde{q}_0^+\Big )^2}\\&\times \Big (\xi _1^+(r_0^+)(\xi _2^+)'(r_0^+) (\eta _1^+)'\eta _2^+ -(\xi _1^+)'(r_0^+) \xi _2^+(r_0^+)\eta _1^+(\eta _2^+)'\Big )d\theta \\&+\int _\pi ^{2\pi }\frac{-\widetilde{p}_1^- \widetilde{q}_0^-+\widetilde{p}_0^- \widetilde{q}_1^-}{\frac{\partial r_0^-(\theta ,z)}{\partial z} \Big (-(\xi _2^-)'(r_0^-) \eta _2^- \widetilde{p}_0^-+(\xi _1^-)'(r_0^-) \eta _1^- \widetilde{q}_0^-\Big )^2} \\&\times \Big (\xi _1^-(r_0^-)(\xi _2^-)'(r_0^-)(\eta _1^-)' \eta _2^- -(\xi _1^-)'(r_0^-) \xi _2^-(r_0^-)\eta _1^-(\eta _2^-)'\Big )d\theta , \end{aligned}$$

where \((\xi _i^+)'(r_0^+)=\dfrac{d\xi _i^+}{dr}\circ r_0^+(\theta ,z)\), \((\xi _i^-)'(r_0^-)=\dfrac{d\xi _i^-}{dr}\circ r_0^-(\theta ,z)\), \(i=1,2\), and the \(\circ \) denotes the composition of two functions.

The proof of Theorem 3 will be given in Sect. 3. In the Appendix we present the formula for the second order averaging function associated to system (8), together with its derivation.

Finally we will apply Theorems 2 and 3 to study limit cycle bifurcations under the perturbation of the piecewise linear vector field

$$\begin{aligned} {\mathcal {Z}}_0:= {\left\{ \begin{array}{ll} ~(\dot{x},\dot{y})=(-1,2x),\quad \text{ for } y\ge 0,\\ ~(\dot{x},\dot{y})=(1,2x), \quad \text{ for } y\le 0, \end{array}\right. } \end{aligned}$$

which is the topological normal form of a class of piecewise smooth differential systems around a nondegenerate \(\Sigma \)-center; see [5] for the corresponding results and for the definition of a \(\Sigma \)-center. The authors of [5] provided an example showing that for any \(m\in \mathbb N\) there exists a \(C^\infty \) perturbation of \({\mathcal {Z}}_0\) which has m limit cycles. Next we will prove that for any \(m\in \mathbb N\) there exists a piecewise polynomial perturbation of degree m which can have m limit cycles.
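Before perturbing, it is worth recalling why \({\mathcal {Z}}_0\) has a period annulus. The following sympy sketch records an elementary observation (made explicit here only for convenience; it is not stated in this form in [5]): \(x^2+y\) and \(x^2-y\) are first integrals of the upper and lower vector fields of \({\mathcal {Z}}_0\), and their level sets glue along \(y=0\) into closed curves surrounding the origin.

```python
# An elementary sympy check (our own remark, recorded for convenience): the upper and
# lower halves of Z_0 preserve x^2 + y and x^2 - y respectively, and on y = 0 the two
# first integrals coincide, so their level sets glue into the closed curves of the
# period annulus around the origin.
import sympy as sp

x, y = sp.symbols('x y')
H_up, H_low = x**2 + y, x**2 - y

lie_up = sp.diff(H_up, x) * (-1) + sp.diff(H_up, y) * (2 * x)    # along (-1, 2x), y >= 0
lie_low = sp.diff(H_low, x) * (1) + sp.diff(H_low, y) * (2 * x)  # along (1, 2x),  y <= 0
print(sp.simplify(lie_up), sp.simplify(lie_low))   # 0 0: both are first integrals

# In the coordinates x = r*cos(theta), y = r**2*sin(theta) used in Sect. 4 these closed
# curves are exactly r = r_0^+(theta, z) and r = r_0^-(theta, z) of (34) and (37), since
# x**2 + y = r**2*(cos(theta)**2 + sin(theta)) and x**2 - y = r**2*(cos(theta)**2 - sin(theta)).
```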

Consider the perturbed piecewise polynomial differential system

$$\begin{aligned} \left( \begin{array}{ll} \dot{x}\\ \dot{y} \end{array}\right) = {\left\{ \begin{array}{ll} \left( \begin{array}{c} -1\\ 2x \end{array}\right) +\varepsilon \left( \begin{array}{c} F^+_{11}+\varepsilon F^+_{12}\\ F^+_{21}+\varepsilon F^+_{22} \end{array}\right) ,\qquad y\ge 0,\\ \left( \begin{array}{c} 1 \\ 2x \end{array}\right) + \varepsilon \left( \begin{array}{c} F^-_{11}+\varepsilon F^-_{12}\\ F^-_{21}+\varepsilon F^-_{22} \end{array}\right) ,\qquad y\le 0,\\ \end{array}\right. } \end{aligned}$$
(11)

where

$$\begin{aligned} F^\pm _{11}&=\sum _{i+j=0}^{n}a_{ij}^\pm x^iy^j, \quad F^\pm _{12}=\sum _{i+j=0}^{n}c_{ij}^\pm x^iy^j,\\ F^\pm _{21}&=\sum _{i+j=0}^{n}b_{ij}^\pm x^iy^j, \quad F^\pm _{22}=\sum _{i+j=0}^{n}d_{ij}^\pm x^iy^j. \end{aligned}$$

The next result provides the expression of the first order averaging function associated to system (11).

Theorem 4

For the piecewise smooth system (11), the following statements hold.

(a):

The first order averaging function \(M_1(r)\) associated to system (11) has the form

$$\begin{aligned} M_1(r)=\sum _{l=0}^n B_lr^{2l}. \end{aligned}$$
(b):

System (11) has at most n limit cycles which bifurcate from the \(\Sigma \)-center via the first order averaging method.

(c):

For each \(k\in \{0,1,\ldots ,n\}\) there exists a system of form (11) with \(F_{i2}^{\pm }\equiv 0\), \(i=1,2\), which has exactly k limit cycles.

We now turn to piecewise linear differential systems. We will use the second order averaging methods to prove that there exist piecewise linear differential systems of form (11) which can have 2 limit cycles. For system (11) with \(n=1\), we denote by \({\mathcal {A}}\) the set of parameters of system (11) which satisfy

$$\begin{aligned} b_{00}^+-b_{00}^-=0,\qquad b_{01}^++b_{01}^-+a_{10}^++a_{10}^-=0. \end{aligned}$$

This is in fact the condition under which the first order averaging function vanishes identically.

Our next result is the following.

Theorem 5

For \(|\varepsilon |\ne 0\) sufficiently small, system (11) when \(n=1\) has at most 2 limit cycles for the parameters inside \({\mathcal {A}}\) by using the second order averaging method, and the maximum number can be achieved.

We remark that when applying the third order averaging method to system (11) with a linear perturbation up to third order in \(\varepsilon \), we still obtain at most 2 limit cycles bifurcating from the \(\Sigma \)-center. At the end of the proof of Theorem 5, i.e. at the end of Sect. 4, we will give a detailed explanation.

This paper is arranged as follows. In Sect. 2 we will prove Theorems 1 and 2. The proof of Theorem 3 will be presented in Sect. 3. The proofs of Theorems 4 and 5 will be given in Sect. 4.

2 Proof of Theorems 1 and 2

2.1 Proof of Theorem 1

Recall that \(x(t;z,\varepsilon )\) is the solution of equation (1) satisfying the initial condition \(x(0)=z\). Set

$$\begin{aligned} x(t;z,\varepsilon )= {\left\{ \begin{array}{ll} ~x^1(t,z,\varepsilon ), &{}\quad t\in [0,T_1],\\ ~x^2(t,z,\varepsilon ), &{}\quad t\in [T_1,T]. \end{array}\right. } \end{aligned}$$

Then \(x^2(T_1,z,\varepsilon )=x^1(T_1,z,\varepsilon )\), and each function \(x^i(t,z,\varepsilon )\) satisfies the piecewise smooth differential equation (1) for \(i=1,2\). That is,

$$\begin{aligned} \frac{\partial }{\partial t}x^i(t,z,\varepsilon )=\sum \limits _{k\ge 0}\varepsilon ^k F^i_k(t,x^i(t,z,\varepsilon )),\quad i=1,2. \end{aligned}$$
(12)

Note that \(x^1(0,z,\varepsilon )=z\). We will also write \(x^1(t, z,\varepsilon )\) as \(x^1(t;z,\varepsilon )\) to remind readers that the initial condition is taken at \(t=0\).

Since \(F_k^i(t,x)\) in equation (1) are analytic for \(i=1,2\), the solution \(x^i(t,z,\varepsilon )\) of system (12) can be written as

$$\begin{aligned} x^i(t,z,\varepsilon )=\sum _{j\ge 0}x^i_j(t,z)\varepsilon ^j. \end{aligned}$$
(13)

Then the functions \(x_j(t,z)\) in (4) can be expressed as

$$\begin{aligned}x_j(t,z)= {\left\{ \begin{array}{ll} x_j^1(t,z),\quad t\in [0,T_1],\\ x_j^2(t,z),\quad t\in [T_1,T], \end{array}\right. } \end{aligned}$$

with \(x_j^1(T_1,z)=x_j^2(T_1,z)\) for all nonnegative integers j. As explained in the introduction for equation (2), the functions \(x_j(t,z)\)’s are analytic in z. Substituting the solution (13) into the differential equation (12), and expanding \(F_k^i(t,x^i(t,z,\varepsilon ))\) in Taylor series give

$$\begin{aligned} F_k^i(t,x^i(t,z,\varepsilon ))&=F_k^i\left( t,x^i_0(t,z)+\sum _{j\ge 1}x^i_j(t,z)\varepsilon ^j\right) \\ \quad&=F_k^i(t,x^i_0(t,z))+\sum _{l\ge 1}\frac{1}{l!}\frac{\partial ^l}{\partial x^l}F_k^i(t,x^i_0(t,z))\left( \sum _{j\ge 1}x^i_j(t,z)\varepsilon ^j\right) ^l\\ \quad&=F_k^i(t,x^i_0(t,z))+\sum _{l\ge 1}\varepsilon ^l\sum _{j=1}^l\Delta ^i_{klj}(t,z), \end{aligned}$$

where \(i=1,2\), and

$$\begin{aligned} \Delta ^i_{klj}(t,z)=\frac{1}{j!}\frac{\partial ^j}{\partial x^j}F^i_k(t,x^i_0(t,z))\sum _{m_1+\cdots +m_j=l}x^i_{m_1}(t,z)\times \cdots \times x^i_{m_j}(t,z), \end{aligned}$$
(14)

with \(m_n\in \mathbb N\) for \(n=1,2,\cdots ,j\). Then the right hand side of (12) can be written as

$$\begin{aligned} \sum \limits _{k\ge 0}\varepsilon ^k F^i_k(t,x^i(t,z,\varepsilon ))&=\sum \limits _{k\ge 0}\varepsilon ^kF_k^i(t,x^i_0(t,z))+\sum _{k\ge 0}\sum _{l\ge 1}\varepsilon ^{k+l}\sum _{j=1}^l\Delta ^i_{klj}(t,z)\nonumber \\&=\sum \limits _{k\ge 0}\varepsilon ^kF_k^i(t,x^i_0(t,z))+\sum _{n\ge 1}\varepsilon ^n\sum _{l=1}^n\sum _{j=1}^l\Delta ^i_{n-l,lj}(t,z)\nonumber \\&=F_0^i(t,x^i_0(t,z))+\sum _{k\ge 1}\varepsilon ^k\left( F_k^i(t,x^i_0(t,z))+\sum _{l=1}^k\sum _{j=1}^l\Delta ^i_{k-l,lj}(t,z)\right) . \end{aligned}$$
(15)

It is clear that

$$\begin{aligned} \frac{\partial }{\partial t}x^i(t,z,\varepsilon )=\sum _{j\ge 0}\frac{\partial }{\partial t}x^i_j(t,z)\varepsilon ^j. \end{aligned}$$
(16)

Using (15) and (16), and equating the coefficients of \(\varepsilon \) in (12) yield

$$\begin{aligned} \frac{\partial }{\partial t}x^i_1(t,z)=F_1^i(t,x_0^i(t,z))+\frac{\partial }{\partial x}F_0^i(t,x_0^i(t,z))x^i_1(t,z),\quad i=1,2, \end{aligned}$$
(17)

which are nonhomogeneous linear differential equations.

Let \(u(t,z)\) be the solution of the variational equation

$$\begin{aligned} \frac{\partial u }{\partial t}=\frac{\partial }{\partial x}F_0(t,x_0(t;z))u,\qquad u(0,z)=1 \end{aligned}$$
(18)

of the unperturbed equation of (1) along the solution \(x_0(t;z)\). Set

$$\begin{aligned} u(t,z)={\left\{ \begin{array}{ll} u^1(t,z),\quad t\in [0,T_1],\\ u^2(t,z),\quad t\in [T_1,T]. \end{array}\right. } \end{aligned}$$
(19)

Then \(u^i(t,z)\) satisfies the variational equation

$$\begin{aligned} \frac{\partial }{\partial t}u^i(t,z)=\frac{\partial }{\partial x}F^i_0(t,x^i_0(t,z))u^i(t,z),\quad i=1,2. \end{aligned}$$
(20)

The next result characterizes properties of the solution \(u^i(t,z)\) of the variational equation.

Lemma 6

Assume that the unperturbed equation of (1) has a solution \(x_0(t;z)\) which satisfies \(x_0(0)=z\) for each \(z\in D\) and is T–periodic. Let \(u(t,z)\) be the solution of the variational problem (18). Using the notations in (19), we have

$$\begin{aligned} u^1(t,z)=\frac{\partial }{\partial z} x^1_0(t,z),\qquad u^2(t,z)=\frac{1}{\frac{\partial }{\partial z}x^1_0(T_1,z)}\frac{\partial }{\partial z}x^2_0(t,z), \end{aligned}$$

and they satisfy \(u^1(T_1,z)u^2(T,z)=1\) and \(u^1(0,z)=u^2(T_1,z)=1\).

Proof

Since \(x_0(t;z)\) is a solution of the unperturbed differential equation of equation (1), one has

$$\begin{aligned} \frac{\partial }{\partial t}x_0(t;z)=F_0(t,x_0(t;z)),\qquad x_0(0;z)=z. \end{aligned}$$

It follows that

$$\begin{aligned} \frac{\partial }{\partial t} x^i_0(t,z)=F^i_0(t,x^i_0(t,z)),\quad i=1,2. \end{aligned}$$
(21)

For \(i=1\), \(x_0^1(t,z)\) satisfies the initial condition \(x_0^1(0,z)=z\). So it will also be denoted by \(x_0^1(t;z)\). Since \(F_0^i\in C^\omega (\mathbb S^1\times D)\), it follows that \(x_0^1(t;z)\) is analytic in \([0,T_1]\times D\). Moreover, the variational equation (20) with \(i=1\) and the initial condition \(u^1(0,z)=1\) has the solution

$$\begin{aligned} u^1(t,z)=\frac{\partial }{\partial z}x^1_0(t;z). \end{aligned}$$

For \(i=2\), note that the solution \(x_0^2(t,z)\) satisfies the condition \(x_0^2(T_1,z)=x_0^1(T_1;z)\). Set

$$\begin{aligned} w:=x_0^1(T_1;z). \end{aligned}$$

One has

$$\begin{aligned} x_0^2(t,z)=x_0^2(t; T_1,x_0^1(T_1,z))=x_0^2(t; T_1,w). \end{aligned}$$

Then

$$\begin{aligned} \frac{\partial }{\partial t} x^2_0(t; T_1,w)=F^2_0(t, x^2_0(t ; T_1,w)),\quad x_0^2(T_1; T_1,w)=w. \end{aligned}$$

So its associated variational initial value problem

$$\begin{aligned} \frac{\partial u^2}{\partial t} =\frac{\partial F^2_0(t, x^2_0(t; T_1,w))}{\partial x} u^2,\quad u^2(T_1,z)=1, \end{aligned}$$

has the solution

$$\begin{aligned} u^2(t,z)=\phi ^2(t,w)=\frac{\partial x^2_0(t; T_1,w)}{\partial w}. \end{aligned}$$

In addition, since

$$\begin{aligned} \frac{\partial x^2_0(t; T_1,w)}{\partial w} = \frac{\partial }{\partial w}x_0^2(t; T_1,x_0^1(T_1,z)) =\frac{1}{\frac{\partial }{\partial z}x^1_0(T_1;z)}\frac{\partial }{\partial z}x_0^2(t; T_1,x_0^1(T_1,z)), \end{aligned}$$

and \(x_0^2(t,z)=x_0^2(t; T_1,x_0^1(T_1,z))\), it follows that

$$\begin{aligned} u^2(t,z)= \phi ^2(t,w) =\frac{1}{\frac{\partial }{\partial z}x^1_0(T_1;z)}\frac{\partial }{\partial z}x^2_0(t,z). \end{aligned}$$
(22)

Consequently we have

$$\begin{aligned} u^2(T_1,z)=1, \end{aligned}$$

where we have used the fact that \(x_0^2(T_1,z)=x_0^1(T_1;z)\). Furthermore, since \(u^1(t,z)=\frac{\partial }{\partial z}x^1_0(t;z)\), one gets from (22) that

$$\begin{aligned} u^1(T_1,z)u^2(T,z)=\frac{\partial }{\partial z}x^2_0(T,z)=1, \end{aligned}$$

where in the last equality we have used the fact that \(x^2_0(T,z)=z\). This completes the proof of the lemma.
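The identities of Lemma 6 are easy to test numerically. The sketch below uses the toy piecewise equation introduced after (3), for which the flow is explicit, solves the two variational problems (20), and checks \(u^1(T_1,z)u^2(T,z)=1\); the example is our own illustration.

```python
# A tiny numerical illustration (our own toy example) of the identities in Lemma 6:
# F_0^1(t,x) = -x/2 on [0, T_1] and F_0^2(t,x) = x/2 on [T_1, T], with T_1 = pi and
# T = 2*pi, so that every solution is T-periodic. We solve the two variational
# problems (20) and check u^1(T_1, z) * u^2(T, z) = 1.
import numpy as np
from scipy.integrate import solve_ivp

T1, T = np.pi, 2 * np.pi

# d/dt u^1 = (dF_0^1/dx) u^1 = -u^1/2 with u^1(0) = 1,
# d/dt u^2 = (dF_0^2/dx) u^2 =  u^2/2 with u^2(T_1) = 1.
u1_T1 = solve_ivp(lambda t, u: [-0.5 * u[0]], (0.0, T1), [1.0], rtol=1e-10).y[0, -1]
u2_T = solve_ivp(lambda t, u: [0.5 * u[0]], (T1, T), [1.0], rtol=1e-10).y[0, -1]

# Here u^1(t,z) = exp(-t/2) = d x_0^1/dz and u^2(t,z) = exp((t - T_1)/2), which equals
# (d x_0^2/dz) / (d x_0^1(T_1;z)/dz) as stated in Lemma 6 (independent of z, because
# this toy equation is linear in x).
print(u1_T1 * u2_T)   # should equal 1 up to the integration tolerance
```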

In order to study the solutions \(x_1^i(t,z)\) of Eq. (17) for \(i=1,2\), we set

$$\begin{aligned} x_1^i(t,z)=u^i(t,z)u_1^i(t,z),\quad i=1,2 \end{aligned}$$
(23)

Since \(x_1^1(0,z)=0\) and \(u^1(0,z)=u^2(T_1,z)=1\) by Lemma 6, we have

$$\begin{aligned} u^1_1(0,z)=0,\quad \text{ and } \quad u^2_1(T_1,z)=x^2_1(T_1,z)=x^1_1(T_1,z). \end{aligned}$$

Substituting (23) into (17), we get respectively

$$\begin{aligned} u_1^1(t,z)&=\displaystyle \int ^t_0\frac{1}{u^1(s,z)}F_1^1(s,x^1_0(s,z))ds, \end{aligned}$$
(24)
$$\begin{aligned} u_1^2(t,z)&=x^1_1(T_1,z)+\displaystyle \int ^t_{T_1}\frac{1}{u^2(s,z)}F_1^2(s,x^2_0(s,z))ds. \end{aligned}$$
(25)

For \(t\in [0,T_1]\), substituting (24) into the expression (23) we get from Lemma 6 that

$$\begin{aligned} x_1(t,z)=x_1^1(t,z)&=u^1(t,z)\displaystyle \int ^t_0\frac{1}{u^1(s,z)}F_1^1(s,x^1_0(s,z))ds\\&=\frac{\partial x^1_0(t,z)}{\partial z}\displaystyle \int ^t_0\frac{1}{\frac{\partial x^1_0(s,z)}{\partial z}}F^1_1(s,x_0^1(s,z))ds. \end{aligned}$$

For \(t\in [T_1,T]\), substituting (25) into the expression (23), together with Lemma 6 and the last expression, one has

$$\begin{aligned} x_1(t,z)&=x_1^2(t,z)\\&=u^2(t,z)\left( x^1_1(T_1,z)+\displaystyle \int ^t_{T_1}\frac{1}{u^2(s,z)}F_1^2(s,x^2_0(s,z))ds\right) \\&=u^2(t,z)\left( u^1(T_1,z)\displaystyle \int ^{T_1}_0\frac{1}{u^1(s,z)}F_1^1(s,x^1_0(s,z))ds\right. \\&\qquad \qquad \qquad \qquad \qquad \quad \left. +\displaystyle \int ^t_{T_1}\frac{1}{u^2(s,z)}F_1^2(s,x^2_0(s,z))ds\right) \\&=\frac{\partial x^2_0(t,z)}{\partial z}\left( \displaystyle \int ^{T_1}_0\frac{1}{\frac{\partial x^1_0(s,z)}{\partial z}}F_1^1(s,x^1_0(s,z))ds\right. \\&\qquad \qquad \qquad \qquad \qquad \quad \left. +\displaystyle \int ^t_{T_1}\frac{1}{\frac{\partial x^2_0(s,z)}{\partial z}}F_1^2(s,x^2_0(s,z))ds\right) \\&=\frac{\partial x_0(t,z)}{\partial z}\displaystyle \int ^t_0\frac{1}{\frac{\partial x_0(s,z)}{\partial z}}F_1(s,x_0(s,z))ds. \end{aligned}$$

where in the third and fourth equalities we have used Lemma 6.

For \(k\ge 2\), equating the coefficients of \(\varepsilon ^k\) in (12) together with the expansions (16) and (15) gives

$$\begin{aligned} \frac{\partial }{\partial t}x^i_k(t,z)&=F_k^i(t,x^i_0(t,z))+\sum _{l=1}^k\sum _{j=1}^l\Delta ^i_{k-l,lj}(t, z)\nonumber \\&=F_k^i(t,x^i_0(t,z))+\sum _{l=0}^{k-1}\sum _{j=1}^{k-l}\Delta ^i_{k-l-j,l+j,j}(t, z)\nonumber \\&=F_k^i(t,x^i_0(t,z))+\sum _{l=0}^{k-2}\sum _{j=1}^{k-l}\Delta ^i_{k-l-j,l+j,j}(t, z)+\Delta ^i_{0,k,1}(t, z)\nonumber \\&=\frac{\partial }{\partial x}F^i_0(t,x^i_0(t,z))x^i_k(t,z)+R^i_k(t,z), \end{aligned}$$
(26)

where

$$\begin{aligned} R^i_k(t,z)=F_k^i(t,x^i_0(t,z))+\sum _{l=0}^{k-2}\sum _{j=1}^{k-l}\Delta ^i_{k-l-j,l+j,j}(t, z). \end{aligned}$$

Clearly,

$$\begin{aligned} R_k(t,z)= {\left\{ \begin{array}{ll} R^1_k(t,z),\quad t\in [0,T_1],\\ R^2_k(t,z),\quad t\in [T_1,T], \end{array}\right. } \end{aligned}$$

with \(R_k^1(t,z)\) and \(R_k^2(t,z)\) analytic in \([0,T_1]\times D\) and \([T_1,T]\times D\), respectively. Note that equations (26) are linear differential equations. Working in a similar way as for equation (17), one gets the solution of the linear differential equation (26):

$$\begin{aligned} x_k(t,z)= {\left\{ \begin{array}{ll} \frac{\partial x^1_0(t,z)}{\partial z}\displaystyle \int ^t_0\frac{1}{\frac{\partial x^1_0(s,z)}{\partial z}}R^1_k(s,z)ds,\quad \quad \quad \quad t\in [0,T_1],\\ \frac{\partial x^2_0(t,z)}{\partial z}\left( \displaystyle \int ^{T_1}_0\frac{1}{\frac{\partial x^1_0(s,z)}{\partial z}}R^1_k(s,z)ds\right. \\ \qquad \qquad \qquad \left. +\displaystyle \int ^t_{T_1}\frac{1}{\frac{\partial x^2_0(s,z)}{\partial z}}R^2_k(s,z)ds\right) ,\quad t\in [T_1,T]. \end{array}\right. } \end{aligned}$$

We complete the proof of the theorem. \(\square \)

2.2 Proof of Theorem 2

By the assumption that the unperturbed equation of (1) has a family of periodic solutions \(x_0(t;z)\) of period T, \(z\in D\), we can define the Poincaré map of the perturbed periodic differential equation (1) for \(|\varepsilon |\) sufficiently small:

$$\begin{aligned} P_\varepsilon (z)=x(T;z,\varepsilon )-z=\sum \limits _{j\ge 1}x_j(T,z)\varepsilon ^j, \end{aligned}$$

where \(x(t;z,\varepsilon )\) is the solution of equation (1) satisfying the initial condition \(x(0)=z\). In the expansion of \(P_\varepsilon (z)\) we have used the fact that \(x(T;z,0)=x_0(T;z)=z\).

(a) Define

$$\begin{aligned} f(z,\varepsilon )=(x(T;z,\varepsilon )-z)/\varepsilon . \end{aligned}$$

Clearly it is analytic in its variables. Note from Theorem 1 that the averaging function \(h_1(z)\) is \(x_1(T,z)\). One has

$$\begin{aligned} f(z,\varepsilon )=h_1(z)+\varepsilon \varphi (z,\varepsilon ). \end{aligned}$$

Clearly, \(x(t;z,\varepsilon )\) is a T–periodic solution for \(z\in D\) if and only if \(f(z,\varepsilon )=0\). According to the conditions in Theorem 2(a), the function \(f(z,\varepsilon )\) satisfies

$$\begin{aligned} f(z^*,0)=0,\quad \frac{\partial f(z^*,0)}{\partial z}=\frac{\partial h_1(z^*)}{\partial z}\ne 0. \end{aligned}$$

By the Implicit Function Theorem, there exists a unique analytic function \(z=\phi (\varepsilon )\) defined in a neighborhood of \(\varepsilon =0\) such that \(f(\phi (\varepsilon ),\varepsilon )=0\) with \(\phi (0)=z^*\). That is

$$\begin{aligned} x(T;\phi (\varepsilon ),\varepsilon )\equiv \phi (\varepsilon ). \end{aligned}$$

Therefore \(x(t;\phi (\varepsilon ),\varepsilon )\) is a periodic solution of period T of the perturbed differential equation (1) such that \(x(0;\phi (\varepsilon ),\varepsilon )\rightarrow z^*\) when \(\varepsilon \rightarrow 0\).

(b) Since \(h_1(z)=\ldots =h_{n-1}(z)\equiv 0\), and \(h_n(z)\not \equiv 0\), \(n\ge 2\), we define

$$\begin{aligned} g(z,\varepsilon ) =(x(T,z,\varepsilon )-z)/\varepsilon ^n. \end{aligned}$$

Clearly it is analytic. We have

$$\begin{aligned} g(z,\varepsilon )=x_n(T,z)+O(\varepsilon ). \end{aligned}$$

Note from Theorem 1 that the averaging function \(h_n(z)\) is \(x_n(T,z)\). Then the proof follows from the same arguments as those in the proof of (a).

We complete the proof of the theorem. \(\square \)

3 Proof of Theorem 3

For the planar piecewise smooth differential system (8), we transform it, by the generalized polar change of variables (9) taken with the superscript \(+\) in the upper half plane and with the superscript \(-\) in the lower half plane, into a one dimensional piecewise smooth differential equation of form (1). The averaging functions are then given by (5) and (6). Some calculations show that system (8) can be written in the upper and lower half planes respectively as

$$\begin{aligned} \begin{aligned} \frac{d r}{d\theta }&=\frac{\sum \limits _{k\ge 0}\varepsilon ^k\left( \xi _2^\pm (\eta _2^\pm )'p_k^\pm -\xi _1^\pm (\eta _1^\pm )'q_k^\pm \right) }{\sum \limits _{k\ge 0}\varepsilon ^k\left( -(\xi _2^\pm )'\eta _2^\pm p_k^\pm +(\xi _1^\pm )'\eta _1^\pm q_k^\pm \right) }, \end{aligned} \end{aligned}$$
(27)

where \('\) denotes the derivative of the functions with respect to their variables, \(\xi _i^\pm :=\xi _i^\pm (r), \eta _i^\pm :=\eta _i^\pm (\theta )\), \(i=1,2\), and for \(k\in \mathbb Z_+=\mathbb N\cup \{0\}\)

$$\begin{aligned}&p_k^+:=p_k^+(\xi _1^+(r)\eta _1^+(\theta ), \xi _2^+(r)\eta _2^+(\theta )), \quad q_k^+:=q_k^+(\xi _1^+(r)\eta _1^+(\theta ),\xi _2^+(r)\eta _2^+(\theta )),\\&p_k^-:=p_k^-(\xi _1^-(r)\eta _1^-(\theta ), \xi _2^-(r)\eta _2^-(\theta )), \quad q_k^-:=q_k^-(\xi _1^-(r)\eta _1^-(\theta ),\xi _2^-(r)\eta _2^-(\theta )). \end{aligned}$$
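Equation (27) is just the chain rule applied to (9) followed by solving a \(2\times 2\) linear system for \(\dot{r}\) and \(\dot{\theta }\). The following sympy sketch reproduces this computation with generic \(\xi _i,\eta _i\) and with generic right-hand sides p, q, which stand for the full series \(\sum _k\varepsilon ^kp_k^\pm \) and \(\sum _k\varepsilon ^kq_k^\pm \); it is only a symbolic cross-check, not part of the proof.

```python
# A symbolic cross-check of (27) (a sketch, not part of the proof): apply the chain
# rule to x = xi1(r)*eta1(theta), y = xi2(r)*eta2(theta) and solve the resulting
# linear system for rdot and thetadot; p, q stand for the full right-hand sides.
import sympy as sp

r, th, rdot, thdot, p, q = sp.symbols('r theta rdot thetadot p q')
xi1, xi2, eta1, eta2 = (sp.Function(n) for n in ('xi1', 'xi2', 'eta1', 'eta2'))

dx = sp.diff(xi1(r), r) * eta1(th) * rdot + xi1(r) * sp.diff(eta1(th), th) * thdot
dy = sp.diff(xi2(r), r) * eta2(th) * rdot + xi2(r) * sp.diff(eta2(th), th) * thdot

sol = sp.solve([sp.Eq(dx, p), sp.Eq(dy, q)], [rdot, thdot], dict=True)[0]
dr_dth = sp.simplify(sol[rdot] / sol[thdot])

# right-hand side of (27), with p, q in place of the eps-series
expected = (xi2(r) * sp.diff(eta2(th), th) * p - xi1(r) * sp.diff(eta1(th), th) * q) / \
           (-sp.diff(xi2(r), r) * eta2(th) * p + sp.diff(xi1(r), r) * eta1(th) * q)
print(sp.simplify(dr_dth - expected))   # 0
```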

Set

$$\begin{aligned}&f_k^\pm (\theta ,r):=\xi _2^\pm (\eta _2^\pm )'p_k^\pm -\xi _1^\pm (\eta _1^\pm )'q_k^\pm ,\\&g_k^\pm (\theta ,r):=-(\xi _2^\pm )'\eta _2^\pm p_k^\pm +(\xi _1^\pm )'\eta _1^\pm q_k^\pm . \end{aligned}$$

We have

$$\begin{aligned} \frac{d r}{d\theta }= & {} \sum \limits _{k\ge 0}\varepsilon ^k \frac{f_k^\pm (\theta ,r)}{g_0^\pm (\theta ,r)}\left( 1+\sum \limits _{j\ge 1}\left( -\frac{1}{g_0^\pm (\theta ,r)}\right) ^j\left( \sum \limits _{k\ge 1}\varepsilon ^k g_k^\pm (\theta ,r)\right) ^j\right) \nonumber \\= & {} \sum \limits _{k\ge 0}\varepsilon ^k \frac{f_k^\pm (\theta ,r)}{g_0^\pm (\theta ,r)}\nonumber \\&\times \,\left( 1+\sum \limits _{l\ge 1}\varepsilon ^l\sum \limits _{j=1}^{l}\left( -\frac{1}{g_0^\pm (\theta ,r)}\right) ^j\sum \limits _{m_1+\cdots +m_j=l}g_{m_1}^\pm (\theta ,r)\times \cdots \times g_{m_j}^\pm (\theta ,r)\right) \nonumber \\= & {} \frac{f_0^\pm (\theta ,r)}{g_0^\pm (\theta ,r)}+\frac{1}{g_0^\pm (\theta ,r)}\sum \limits _{k\ge 1}\varepsilon ^k\left( f_k^\pm (\theta ,r)+\sum \limits _{l=1}^kf_{k-l}^\pm (\theta ,r)\right. \nonumber \\&\quad \left. \qquad \times \sum \limits _{j=1}^l\left( -\frac{1}{g_0^\pm (\theta ,r)}\right) ^j\sum \limits _{m_1+\cdots +m_j=l}g_{m_1}^\pm (\theta ,r)\times \cdots \times g_{m_j}^\pm (\theta ,r)\right) \nonumber \\= & {} \sum \limits _{k\ge 0}\varepsilon ^kF^\pm _k(\theta ,r), \end{aligned}$$
(28)

where

$$\begin{aligned} F^\pm _0(\theta ,r)= & {} \frac{f_0^\pm (\theta ,r)}{g_0^\pm (\theta ,r)}=\frac{\xi _2^\pm (\eta _2^\pm )'p_0^\pm -\xi _1^\pm (\eta _1^\pm )'q_0^\pm }{-(\xi _2^\pm )'\eta _2^\pm p_0^\pm +(\xi _1^\pm )'\eta _1^\pm q_0^\pm },\nonumber \\ F^\pm _1(\theta ,r)= & {} \frac{1}{(g_0^\pm (\theta ,r))^2}\left( f_1^\pm (\theta ,r)g_0^\pm (\theta ,r)-f_0^\pm (\theta ,r)g_1^\pm (\theta ,r)\right) \nonumber \\= & {} \frac{\Big (\xi _1^\pm (\xi _2^\pm )'(\eta _1^\pm )'\eta _2^\pm -(\xi _1^\pm )'\xi _2^\pm \eta _1^\pm (\eta _2^\pm )'\Big )(-p_1^\pm q_0^\pm +p_0^\pm q_1^\pm )}{\left( -(\xi _2^\pm )'\eta _2^\pm p_0^\pm +(\xi _1^\pm )'\eta _1^\pm q_0^\pm \right) ^2},\nonumber \\ F^\pm _2(\theta ,r)= & {} \frac{1}{(g_0^\pm (\theta ,r))^3}\Big (g_0^\pm (\theta ,r)\big (f_2^\pm (\theta ,r)g_0^\pm (\theta ,r)-f_0^\pm (\theta ,r)g_2^\pm (\theta ,r)\big )\nonumber \\&+g_1^\pm (\theta ,r)\big (f_0^\pm (\theta ,r)g_1^\pm (\theta ,r)-f_1^\pm (\theta ,r)g_0^\pm (\theta ,r)\big )\Big )\nonumber \\= & {} \frac{\Big (\xi _1^\pm (\xi _2^\pm )'(\eta _1^\pm )'\eta _2^\pm -(\xi _1^\pm )'\xi _2^\pm \eta _1^\pm (\eta _2^\pm )'\Big )(-p_2^\pm q_0^\pm +p_0^\pm q_2^\pm )}{\left( -(\xi _2^\pm )'\eta _2^\pm p_0^\pm +(\xi _1^\pm )'\eta _1^\pm q_0^\pm \right) ^2}\nonumber \\&+\frac{\Big (\xi _2^\pm (\xi _1^\pm )'(\eta _2^\pm )'\eta _1^\pm -(\xi _2^\pm )'\xi _1^\pm \eta _2^\pm (\eta _1^\pm )'\Big )(-p_1^\pm q_0^\pm +p_0^\pm q_1^\pm )}{\left( -(\xi _2^\pm )'\eta _2^\pm p_0^\pm +(\xi _1^\pm )'\eta _1^\pm q_0^\pm \right) ^3}\nonumber \\&\times \left( -(\xi _2^\pm )'\eta _2^\pm p_1^\pm +(\xi _1^\pm )'\eta _1^\pm q_1^\pm \right) . \end{aligned}$$
(29)

From the expression (5), we have

$$\begin{aligned} H_1(z)=\int \limits _0^{\pi }\frac{F^+_1(s,r^+_0(s,z))}{\frac{\partial r^+_0(s,z)}{\partial z}}ds+\int \limits _{\pi }^{2\pi }\frac{F^-_1(s,r^-_0(s,z))}{\frac{\partial r^-_0(s,z)}{\partial z}}ds. \end{aligned}$$
(30)

Substituting \(F_i^\pm (\theta ,r)\) for \(i=0,1,2\) in (29) into this last expression, we get the concrete expression of \(H_1(z)\) as shown in Theorem 3.

We complete the proof of the theorem. \(\square \)

4 Proof of Theorems 4 and 5

For the planar piecewise smooth differential system (11), we can parameterize it in the generalized polar coordinates \((x,y)=(r\cos \theta ,r^2\sin \theta )\). As we will see, this change of coordinates is more convenient than the standard polar one. After this change system (11) can be written as

$$\begin{aligned} \frac{dr}{d\theta }= {\left\{ \begin{array}{ll} r\dfrac{-r\cos \theta +2r\sin \theta \cos \theta +\varepsilon U^+(\theta ,r)}{2r\cos ^2\theta +2r\sin \theta +\varepsilon V^+(\theta ,r)}, &{}\qquad \theta \in [0,\pi ],\\ r\dfrac{r\cos \theta +2r\sin \theta \cos \theta +\varepsilon U^-(\theta ,r)}{2r\cos ^2\theta -2r\sin \theta +\varepsilon V^-(\theta ,r)}, &{}\qquad \theta \in [\pi ,2\pi ], \end{array}\right. } \end{aligned}$$
(31)

where

$$\begin{aligned} U^+(\theta ,r)&= r\cos \theta F_{11}^++\sin \theta F_{21}^+ \\&\quad +\varepsilon (r\cos \theta F_{12}^++\sin \theta F_{22}^+)+\varepsilon ^2(r\cos \theta F_{13}^++\sin \theta F_{23}^+), \\ V^+(\theta ,r)&=\cos \theta F_{21}^+-2r\sin \theta F_{11}^+ \\&\quad +\varepsilon (\cos \theta F_{22}^+-2r\sin \theta F_{12}^+) +\varepsilon ^2(\cos \theta F_{23}^+-2r\sin \theta F_{13}^+),\\ U^-(\theta ,r)&= r\cos \theta F_{11}^-+\sin \theta F_{21}^- \\&\quad +\varepsilon (r\cos \theta F_{12}^-+\sin \theta F_{22}^-) +\varepsilon ^2(r\cos \theta F_{13}^-+\sin \theta F_{23}^-), \\ V^-(\theta ,r)&=\cos \theta F_{21}^--2r\sin \theta F_{11}^- \\&\quad +\varepsilon (\cos \theta F_{22}^--2r\sin \theta F_{12}^-) +\varepsilon ^2(\cos \theta F_{23}^--2r\sin \theta F_{13}^-).\\ \end{aligned}$$

We can check that \(\cos ^2\theta +\sin \theta >0\) for \(\theta \in [0,\pi ]\), and \(\cos ^2\theta -\sin \theta >0\) for \(\theta \in [\pi ,2\pi ]\) (in fact both quantities are greater than or equal to 1). Then we can expand the right hand side of equation (31) in Taylor series in \(\varepsilon \).

For \(\theta \in [0,\pi ]\), by Taylor expansion in a neighborhood of \(\varepsilon =0\) one has

$$\begin{aligned} \frac{dr}{d\theta }=F_0^+(\theta ,r)+\varepsilon F_1^+(\theta ,r)+\varepsilon ^2F_2^+(\theta ,r)+O(\varepsilon ^3), \end{aligned}$$
(32)

where

$$\begin{aligned} F_0^+(\theta ,r)= & {} \frac{-\cos \theta +2\sin \theta \cos \theta }{2(\cos ^2\theta +\sin \theta )}r,\nonumber \\ F_1^+(\theta ,r)= & {} \frac{1}{4(\cos ^2\theta +\sin \theta )^2}\left( 2r\cos \theta (1+\sin ^2\theta )F_{11}^++(1+\sin ^2\theta )F_{21}^+\right) ,\nonumber \\ F_2^+(\theta ,r)= & {} \frac{1}{8(\cos ^2\theta +\sin \theta )^3}\left( 4r\cos \theta \sin \theta (1+\sin ^2\theta )\left( F_{11}^+\right) ^2\right. \nonumber \\&+2(1+\sin ^2\theta )(\sin \theta -\cos ^2\theta )F_{11}^+F_{21}^+-\frac{1}{r}\cos \theta (1+\sin ^2\theta )\left( F_{21}^+\right) ^2\nonumber \\&+4r\cos \theta (1+\sin ^2\theta )(\sin \theta +\cos ^2\theta )F_{12}^+\nonumber \\&\left. +2(1+\sin ^2\theta )(\sin \theta +\cos ^2\theta )F_{22}^+\right) , \end{aligned}$$
(33)

with \(F_{ij}^+:=F_{ij}^+(r\cos \theta ,r^2\sin \theta )\). Some calculations show that the unperturbed equation \((32)|_{\varepsilon =0}\) with the initial point \((\theta ,r)=(0,z)\), \(z\ge 0\), has the solution

$$\begin{aligned} r_0^+(\theta ,z)=\frac{z}{\sqrt{\cos ^2\theta +\sin \theta }}. \end{aligned}$$
(34)

For \(\theta \in [\pi ,2\pi ]\), similar to the manipulations in the case \(\theta \in [0,\pi ]\) one gets

$$\begin{aligned} \frac{dr}{d\theta }=F_0^-(\theta ,r)+\varepsilon F_1^-(\theta ,r)+\varepsilon ^2F_2^-(\theta ,r)+ O(\varepsilon ^3), \end{aligned}$$
(35)

where

$$\begin{aligned} F_0^-(\theta ,r)= & {} \frac{\cos \theta +2\sin \theta \cos \theta }{2(\cos ^2\theta -\sin \theta )}r,\nonumber \\ F_1^-(\theta ,r)= & {} \frac{1}{4(\cos ^2\theta -\sin \theta )^2}\left( 2r\cos \theta (1+\sin ^2\theta )F_{11}^--(1+\sin ^2\theta )F_{21}^-\right) ,\nonumber \\ F_2^-(\theta ,r)= & {} \frac{1}{8(\cos ^2\theta -\sin \theta )^3}\left( 4r\cos \theta \sin \theta (1+\sin ^2\theta )\left( F_{11}^-\right) ^2\right. \nonumber \\&-2(1+\sin ^2\theta )(\sin \theta +\cos ^2\theta )F_{11}^-F_{21}^-+\frac{1}{r}\cos \theta (1+\sin ^2\theta )\left( F_{21}^-\right) ^2\nonumber \\&+4r\cos \theta (1+\sin ^2\theta )(\cos ^2\theta -\sin \theta )F_{12}^-\nonumber \\&\left. -2(1+\sin ^2\theta )(\cos ^2\theta -\sin \theta )F_{22}^-\right) , \end{aligned}$$
(36)

with \(F_{ij}^-:=F_{ij}^-(r\cos \theta ,r^2\sin \theta )\). Recall that \(\cos ^2\theta -\sin \theta >0\) for \(\theta \in [\pi ,2\pi ]\). It follows that the functions \(F_{i}^-(\theta ,r)\), \(i=0,1,2\), are analytic. Clearly, the unperturbed equation \((35)|_{\varepsilon =0}\) with the initial point \((\theta ,r)=(\pi ,r^+_0(\pi ,z))=(\pi ,z)\) has the solution

$$\begin{aligned} r_0^-(\theta ,z)=\frac{z}{\sqrt{\cos ^2\theta -\sin \theta }}. \end{aligned}$$
(37)
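The solutions (34) and (37) can be verified symbolically; the short sympy sketch below checks that they satisfy the unperturbed equations and the matching condition \(r_0^-(\pi ,z)=r_0^+(\pi ,z)=z\).

```python
# A short symbolic check (a sketch): (34) and (37) solve the unperturbed equations
# (32)|_{eps=0} and (35)|_{eps=0}, and they match at theta = pi with value z.
import sympy as sp

th, z = sp.symbols('theta z', positive=True)

r_plus = z / sp.sqrt(sp.cos(th)**2 + sp.sin(th))      # (34), theta in [0, pi]
r_minus = z / sp.sqrt(sp.cos(th)**2 - sp.sin(th))     # (37), theta in [pi, 2*pi]

# F_0^+(theta, r)/r and F_0^-(theta, r)/r from (33) and (36)
F0_plus = (-sp.cos(th) + 2 * sp.sin(th) * sp.cos(th)) / (2 * (sp.cos(th)**2 + sp.sin(th)))
F0_minus = (sp.cos(th) + 2 * sp.sin(th) * sp.cos(th)) / (2 * (sp.cos(th)**2 - sp.sin(th)))

print(sp.simplify(sp.diff(r_plus, th) - F0_plus * r_plus))    # 0
print(sp.simplify(sp.diff(r_minus, th) - F0_minus * r_minus)) # 0
print(r_plus.subs(th, 0), r_minus.subs(th, sp.pi))            # both equal z
```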

4.1 Proof of Theorem 4

For computing the first order averaging function associated to system (11), which is a particular case of (8), we can set \(F_{i2}^{\pm } \equiv 0\), \(i=1,2\), since these terms enter only at order \(\varepsilon ^2\).

According to the expression (30), set

$$\begin{aligned} H_{11}(z):=\int \limits _0^{\pi }\frac{F^+_1(s,r^+_0(s,z))}{\frac{\partial r^+_0(s,z)}{\partial z}}ds, \quad H_{12}(z):=\int \limits _{\pi }^{2\pi }\frac{F^-_1(s,r^-_0(s,z))}{\frac{\partial r^-_0(s,z)}{\partial z}}ds. \end{aligned}$$

From (11) and (32) one has

$$\begin{aligned} \begin{aligned}&\frac{F^+_1(s,r^+_0(s,z))}{\frac{\partial r^+_0(s,z)}{\partial z}}\\&=\frac{1}{4}(\cos ^2\theta +\sin \theta )^{-\frac{3}{2}}\left( 2(1+\sin ^2\theta )\sum \limits _{i+j=0}^n a_{ij}^+(r^+_0(s,z))^{i+2j+1}\cos ^{i+1}\theta \sin ^j\theta \right. \\&\qquad \qquad \qquad \qquad \qquad \left. +(1+\sin ^2\theta )\sum \limits _{i+j=0}^n b_{ij}^+(r^+_0(s,z))^{i+2j}\cos ^{i}\theta \sin ^j\theta \right) \\&=\frac{1}{2}\sum \limits _{i+j=0}^n a_{ij}^+z^{i+2j+1}(\cos ^2\theta +\sin \theta )^{-\frac{i+2j+4}{2}}\cos ^{i+1}\theta \sin ^j\theta (1+\sin ^2\theta )\\&\quad +\frac{1}{4}\sum \limits _{i+j=0}^n b_{ij}^+z^{i+2j+1}(\cos ^2\theta +\sin \theta )^{-\frac{i+2j+3}{2}}\cos ^{i}\theta \sin ^j\theta (1+\sin ^2\theta ). \end{aligned} \end{aligned}$$

Then

$$\begin{aligned} \begin{aligned} H_{11}=\frac{1}{2}\sum \limits _{i+j=0}^n a_{ij}^+I_{i+1,j}^+z^{i+2j+1}+\frac{1}{4}\sum \limits _{i+j=0}^n b_{ij}^+I_{ij}^+z^{i+2j}, \end{aligned} \end{aligned}$$
(38)

where

$$\begin{aligned} I_{ij}^+=\displaystyle \int \limits _0^{\pi }(\cos ^2\theta +\sin \theta )^{-\frac{i+2j+3}{2}}\cos ^{i}\theta \sin ^j\theta (1+\sin ^2\theta )d\theta . \end{aligned}$$
(39)

Collecting the coefficients of \(z^k\), \(k\in \{0,1,\ldots ,2n+1\}\), in the two summations of the expression (38), one gets

$$\begin{aligned} H_{11}= & {} \frac{1}{2}\left( \sum \limits _{k=1}^{n+1}z^k\sum \limits _{j=0}^{[\frac{k-1}{2}]}a^+_{k-2j-1,j}I^+_{k-2j,j} +\sum \limits _{k=n+2}^{2n+1}z^k\sum \limits _{j=k-n-1}^{[\frac{k-1}{2}]}a^+_{k-2j-1,j}I^+_{k-2j,j}\right) \nonumber \\&+\frac{1}{4}\left( \sum \limits _{k=0}^{n}z^k\sum \limits _{j=0}^{[\frac{k}{2}]}b^+_{k-2j,j}I^+_{k-2j,j} +\sum \limits _{k=n+1}^{2n}z^k\sum \limits _{j=k-n}^{[\frac{k}{2}]}b^+_{k-2j,j}I^+_{k-2j,j}\right) \nonumber \\= & {} \sum \limits ^{2n+1}_{k=0}\widetilde{J}_k z^k, \end{aligned}$$
(40)

where

$$\begin{aligned} \widetilde{J}_k= {\left\{ \begin{array}{ll} \frac{1}{4}b_{00}^+I_{00}^+, &{}k=0,\\ \frac{1}{2}\sum \limits _{j=0}^{[\frac{k-1}{2}]}a^+_{k-2j-1,j}I^+_{k-2j,j}+\frac{1}{4}\sum \limits _{j=0}^{[\frac{k}{2}]}b^+_{k-2j,j}I^+_{k-2j,j}, &{}1\le k\le n,\\ \frac{1}{2}\sum \limits _{j=0}^{[\frac{n}{2}]}a^+_{n-2j,j}I^+_{n-2j+1,j}+\frac{1}{4}\sum \limits _{j=1}^{[\frac{n+1}{2}]}b^+_{n-2j+1,j}I^+_{n-2j+1,j}, &{}k= n+1,\\ \frac{1}{2}\sum \limits _{j=k-n-1}^{[\frac{k-1}{2}]}a^+_{k-2j-1,j}I^+_{k-2j,j}+\frac{1}{4}\sum \limits _{j=k-n}^{[\frac{k}{2}]}b^+_{k-2j,j}I^+_{k-2j,j}, &{}n+2\le k\le 2n,\\ \frac{1}{2}a_{0n}^+I_{1n}^+, &{}k=2n+1. \end{array}\right. } \end{aligned}$$
(41)

In order to study the properties of the functions \(I_{ij}^+\), we need the formulas

$$\begin{aligned} \begin{aligned}&\displaystyle \int (\cos ^2\theta +\sin \theta )^{-\frac{i+3}{2}}\cos ^{i}\theta (1+\sin ^2\theta )d\theta \\&\qquad \qquad \qquad \qquad = -\frac{2}{i+1}\cos ^{i+1}\theta (\cos ^2\theta +\sin \theta )^{-\frac{i+1}{2}}, \end{aligned} \end{aligned}$$
(42)

where we omit the integrating constant. By this we get

$$\begin{aligned} \begin{aligned}&\displaystyle \int (\cos ^2\theta +\sin \theta )^{-\frac{i+2j+3}{2}}\cos ^{i}\theta \sin ^{j}\theta (1+\sin ^2\theta )d\theta \\&=\sum \limits _{l=0}^j(-1)^{j-l}C_j^l \int (\cos ^2\theta +\sin \theta )^{-\frac{i+2(j-l)+3}{2}}\cos ^{i+2(j-l)}\theta (1+\sin ^2\theta )d\theta , \end{aligned} \end{aligned}$$
(43)

where \(C_j^l=\left( \begin{array}{c} j\\ l\end{array}\right) =\frac{j!}{l!(j-l)!}\). So it follows from (42) that

$$\begin{aligned} I_{i0}^+= {\left\{ \begin{array}{ll} 0,&{} \quad i~\text{ odd },\\ \dfrac{4}{i+1},&{}\quad i~\text{ even }, \end{array}\right. } \end{aligned}$$

and from (43) that

$$\begin{aligned} I_{ij}^+= {\left\{ \begin{array}{ll} 0,&{}\quad i~\text{ odd },\\ \sum \limits _{l=0}^j(-1)^{j-l}\dfrac{4 C_j^l}{i+2(j-l)+1},&{}\quad i~\text{ even }. \end{array}\right. } \end{aligned}$$
(44)
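The closed form (44) can be spot-checked against the defining integrals (39) by numerical quadrature; the following minimal sketch does so for a few pairs (i, j).

```python
# A numerical spot check (a sketch) of the closed form (44) for the integrals
# I_{ij}^+ defined in (39).
import numpy as np
from scipy.integrate import quad
from math import comb

def I_plus_quad(i, j):
    f = lambda th: (np.cos(th)**2 + np.sin(th))**(-(i + 2*j + 3) / 2) \
                   * np.cos(th)**i * np.sin(th)**j * (1 + np.sin(th)**2)
    return quad(f, 0, np.pi)[0]

def I_plus_formula(i, j):
    if i % 2 == 1:
        return 0.0
    return sum((-1)**(j - l) * 4 * comb(j, l) / (i + 2*(j - l) + 1) for l in range(j + 1))

for (i, j) in [(0, 0), (2, 0), (0, 1), (2, 1), (1, 2), (4, 3)]:
    print(i, j, I_plus_quad(i, j), I_plus_formula(i, j))   # the two columns agree
```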

Using those expressions of \(I_{ij}^+\)’s we get easily that

$$\begin{aligned} \widetilde{J}_k=0, \qquad \text{ for } k \text{ odd }. \end{aligned}$$

From the expressions of \(H_{11}\) in (40) and of \(\widetilde{J}_k\) in (41), one can rewrite \(H_{11}\) as

$$\begin{aligned} H_{11}=\sum \limits ^{n}_{k=0}J^+_k z^{2k}, \end{aligned}$$
(45)

where

$$\begin{aligned} J^+_k=\left\{ \begin{array}{ll} \dfrac{1}{4}b_{00}^+I_{00}^+, &{} k=0,\\ \dfrac{1}{2}\sum \limits _{j=0}^{k-1}a^+_{2k-2j-1,j}I^+_{2k-2j,j}+\dfrac{1}{4}\sum \limits _{j=0}^{k}b^+_{2k-2j,j}I^+_{2k-2j,j}, &{}2k\in \{ 1,\ldots n\},\\ \dfrac{1}{2}\sum \limits _{j=0}^{[\frac{n}{2}]}a^+_{n-2j,j}I^+_{n-2j+1,j}+\dfrac{1}{4}\sum \limits _{j=1}^{[\frac{n+1}{2}]}b^+_{n-2j+1,j}I^+_{n-2j+1,j}, &{} 2k= n+1,\\ \dfrac{1}{2}\sum \limits _{j=2k-n-1}^{k-1}a^+_{2(k-j)-1,j}I^+_{2(k-j),j} &{} \\ +\dfrac{1}{4}\sum \limits _{j=2k-n}^{k}b^+_{2(k-j),j}I^+_{2(k-j),j}, &{} 2k\in \{n+2,\ldots , 2n\}. \end{array}\right. \end{aligned}$$
(46)

Some similar calculations to those of \(H_{11}\) show that

$$\begin{aligned} H_{12}=\sum \limits ^{n}_{k=0}J^-_k z^{2k}, \end{aligned}$$
(47)

where

$$\begin{aligned} J^-_k= \left\{ \begin{array}{ll} -\dfrac{1}{4}b_{00}^-I_{00}^-, &{} k=0,\\ \dfrac{1}{2}\sum \limits _{j=0}^{k-1}a^-_{2k-2j-1,j}I^-_{2k-2j,j}-\dfrac{1}{4}\sum \limits _{j=0}^{k}b^-_{2k-2j,j}I^-_{2k-2j,j}, &{} 2k\in \{ 1,\ldots , n\},\\ \dfrac{1}{2}\sum \limits _{j=0}^{[\frac{n}{2}]}a^-_{n-2j,j}I^-_{n-2j+1,j}-\dfrac{1}{4}\sum \limits _{j=1}^{[\frac{n+1}{2}]}b^-_{n-2j+1,j}I^-_{n-2j+1,j}, &{} 2k= n+1,\\ \dfrac{1}{2}\sum \limits _{j=2k-n-1}^{k-1}a^-_{2(k-j)-1,j}I^-_{2(k-j),j}&{} \\ -\dfrac{1}{4}\sum \limits _{j=2k-n}^{k}b^-_{2(k-j),j}I^-_{2(k-j),j}, &{} 2k\in \{ n+2,\ldots , 2n\}, \end{array}\right. \end{aligned}$$
(48)

with

$$\begin{aligned} I_{ij}^-= {\left\{ \begin{array}{ll} 0,&{}i~\text{ odd },\\ \sum \limits _{l=0}^j(-1)^{l}\frac{4 C_j^l}{i+2(j-l)+1},\quad &{}i~\text{ even }, \end{array}\right. } \end{aligned}$$
(49)

Substituting (44)–(49) into (30), one gets the first order averaging function

$$\begin{aligned} H_{1}=\sum \limits ^{n}_{k=0}B_k z^{2k}, \end{aligned}$$
(50)

where

$$\begin{aligned} B_k=\left\{ \begin{array}{ll} b_{00}^+-b_{00}^-, &{} k=0,\\ \sum \limits _{j=0}^{k-1}\sum \limits _{l=0}^{j}\dfrac{2(-1)^lC_j^l}{2k-2l+1}\left( (-1)^ja^+_{2k-2j-1,j}+a^-_{2k-2j-1,j}\right) &{} \\ \quad + \sum \limits _{j=0}^{k}\sum \limits _{l=0}^{j}\dfrac{ (-1)^lC_j^l}{2k-2l+1}\left( (-1)^jb^+_{2k-2j,j}-b^-_{2k-2j,j}\right) , &{} 1\le 2k\le n,\\ \sum \limits _{j=0}^{[\frac{n}{2}]}\sum \limits _{l=0}^{j}\dfrac{2(-1)^lC_j^l}{n-2l+2}\left( (-1)^ja^+_{n-2j,j}+a^-_{n-2j,j}\right) &{} \\ \quad + \sum \limits _{j=1}^{[\frac{n+1}{2}]}\sum \limits _{l=0}^{j}\dfrac{ (-1)^lC_j^l}{n-2l+2}\left( (-1)^jb^+_{n-2j+1,j}-b^-_{n-2j+1,j}\right) , &{} 2k= n+1,\\ \sum \limits _{j=2k-n-1}^{k-1}\sum \limits _{l=0}^{j}\dfrac{2(-1)^lC_j^l}{2k-2l+1}\left( (-1)^ja^+_{2k-2j-1,j}+a^-_{2k-2j-1,j}\right) &{} \\ \quad + \sum \limits _{j=2k-n}^{k}\sum \limits _{l=0}^{j}\dfrac{ (-1)^lC_j^l}{2k-2l+1}\left( (-1)^jb^+_{2k-2j,j}-b^-_{2k-2j,j}\right) , &{} n+2\le 2k\le 2n. \end{array}\right. \end{aligned}$$
(51)

From the expression of \(B_k\) in (51), we know that the coefficients of \(H_1\) can take arbitrary values by choosing suitably the coefficients of \(F_{11}^\pm \) and \(F_{21}^\pm \) in system (11). This shows that \(H_1(z)\) has at most n simple positive roots, and that by suitable choices of the coefficients of \(F_{11}^\pm \) and \(F_{21}^\pm \), \(H_1(z)\) does have exactly k simple positive roots for \(k=0,1,\cdots ,n\). Then it follows from Theorem 2 that by suitably choosing the values of \(a_{ij}^\pm \) in \(F_{11}^\pm \) and of \(b_{ij}^\pm \) in \(F_{21}^\pm \), system (11) can have k, \(k=0,1,\cdots ,n\), hyperbolic limit cycles via the first order averaging method.

We complete the proof of Theorem 4. \(\square \)

4.2 Proof of Theorem 5

Now the perturbed differential system (11) is piecewise linear, i.e. \(n=1\). From the proof of Theorem 4 it follows that the first order averaging function which is associated to system (11) with \(n=1\) is

$$\begin{aligned} H_1(z)=\frac{2}{3}\left( a_{10}^++b_{01}^++a_{10}^-+b_{01}^-\right) z^2+b_{00}^+-b_{00}^-. \end{aligned}$$
(52)
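Before proceeding, (52) can be cross-checked numerically: the sketch below evaluates \(H_1(z)\) in (30) by quadrature, using \(F_1^\pm \) from (33), (36) and \(r_0^\pm \) from (34), (37), for randomly chosen linear coefficients (test data of our own), and compares the result with the closed form (52).

```python
# A numerical cross-check of (52) (a sketch with randomly generated test coefficients,
# not data from the paper): H_1(z) from (30) is evaluated by quadrature using F_1^+ and
# F_1^- from (33), (36) and r_0^+, r_0^- from (34), (37), then compared with (52).
import numpy as np
from scipy.integrate import quad

rng = np.random.default_rng(1)
a = {s: rng.normal(size=3) for s in "+-"}   # a00, a10, a01 in F_11 (case n = 1)
b = {s: rng.normal(size=3) for s in "+-"}   # b00, b10, b01 in F_21

def integrand(theta, z, s):
    sgn = 1.0 if s == "+" else -1.0
    g = np.cos(theta) ** 2 + sgn * np.sin(theta)        # cos^2(theta) +/- sin(theta)
    r = z / np.sqrt(g)                                   # r_0(theta, z)
    x, y = r * np.cos(theta), r ** 2 * np.sin(theta)
    F11 = a[s][0] + a[s][1] * x + a[s][2] * y
    F21 = b[s][0] + b[s][1] * x + b[s][2] * y
    F1 = (2 * r * np.cos(theta) * (1 + np.sin(theta) ** 2) * F11
          + sgn * (1 + np.sin(theta) ** 2) * F21) / (4 * g ** 2)
    return F1 * np.sqrt(g)                               # divided by dr_0/dz = g**(-1/2)

def H1(z):
    return (quad(integrand, 0, np.pi, args=(z, "+"))[0]
            + quad(integrand, np.pi, 2 * np.pi, args=(z, "-"))[0])

z = 0.7
closed_form = (2 / 3) * (a["+"][1] + b["+"][2] + a["-"][1] + b["-"][2]) * z ** 2 \
              + b["+"][0] - b["-"][0]
print(H1(z), closed_form)   # the two values agree up to quadrature error
```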

According to Theorem 1, substituting (33)–(34) and (36)–(37) into (6) and (7) together with some calculations we get that the second order averaging function associated to system (11) is

$$\begin{aligned} H_2(z)&=I_0+I_1z+I_2z^2+I_3z^3+I_4z^4, \end{aligned}$$
(53)

where

$$\begin{aligned} I_0&=\frac{1}{2}(b_{00}^-b_{10}^--b_{00}^+b_{10}^+)+d^+_{00}-d^-_{00},\\ I_1&=\frac{2}{3}\left( b_{00}^+(a_{10}^++b_{01}^+)+(2b_{00}^+-b_{00}^-)(a_{10}^-+b_{01}^-)\right) ,\\ I_2&=\frac{2}{3}\left( a_{00}^+(a_{10}^++b_{01}^+)-a_{00}^-(a_{10}^-+b_{01}^-)+c^+_{10}+c^-_{10}+d^+_{01}+d^-_{01}\right) ,\\ I_3&=\frac{4}{9}(a_{10}^++b_{01}^++a_{10}^-+b_{01}^-)^2,\\ I_4&=\frac{4}{15}\left( a^+_{01}(a_{10}^++b_{01}^+)+a^-_{01}(a_{10}^-+b_{01}^-)\right) . \end{aligned}$$

Proof of Theorem 5. Under the conditions \({\mathcal {A}}\), that is \(H_{1}(z)\equiv 0\) in (52), the expression of \(H_{2}(z)\) in (53) can be simplified to

$$\begin{aligned} H_{2}(z)=\,&\frac{4}{15}(a^+_{10}+b_{01}^+)(a^+_{01}-a^-_{01})z^4\\&+\frac{2}{3}((a^+_{10}+b_{01}^+)(a^+_{00}+a_{00}^-)+c^+_{10}+c^-_{10}+d^+_{01}+d^-_{01})z^2\\&+\frac{1}{2}b^-_{00}(b^-_{10}-b^+_{10})+d^+_{00}-d^-_{00}. \end{aligned}$$

Clearly, \(H_{2}(z)\) can have 2 simple positive roots when we choose suitable values of the parameters. Consequently the piecewise linear differential system (11) can have 2 hyperbolic limit cycles, which are obtained by the second order averaging method. Moreover, since \(H_2(z)\) is even in z and of degree 4, it has at most two simple positive roots, so through the second order averaging method we can obtain at most two limit cycles. This proves the theorem. \(\square \)
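For concreteness, here is one hypothetical choice of parameters in \({\mathcal {A}}\) (our own illustration, by no means the only possible one) realizing the two limit cycles: taking \(a_{10}^+=1\), \(a_{10}^-=-1\), \(b_{01}^\pm =b_{00}^\pm =a_{00}^\pm =0\), \(a_{01}^+=15/4\), \(a_{01}^-=0\), \(c_{10}^+=-15/2\), \(c_{10}^-=d_{01}^\pm =0\), \(d_{00}^+=4\), \(d_{00}^-=0\) and \(b_{10}^\pm \) arbitrary, the simplified \(H_2\) above becomes \(z^4-5z^2+4=(z^2-1)(z^2-4)\), with the simple positive roots \(z=1,2\). The sympy sketch below records this check.

```python
# A quick sympy record of the hypothetical parameter choice described above
# (an illustration of ours, not the paper's example): the simplified H_2 becomes
# z^4 - 5 z^2 + 4 and has exactly two simple positive roots.
import sympy as sp

z = sp.symbols('z', positive=True)
A4 = sp.Rational(4, 15) * (1 + 0) * (sp.Rational(15, 4) - 0)   # (4/15)(a10+ + b01+)(a01+ - a01-)
A2 = sp.Rational(2, 3) * ((1 + 0) * (0 + 0) + sp.Rational(-15, 2) + 0 + 0 + 0)
A0 = sp.Rational(1, 2) * 0 * (0 - 0) + 4 - 0                   # (1/2) b00-(b10- - b10+) + d00+ - d00-
H2 = A4 * z**4 + A2 * z**2 + A0
print(sp.factor(H2))     # (z - 2)*(z - 1)*(z + 1)*(z + 2)
print(sp.solve(H2, z))   # [1, 2]: two simple positive roots, hence 2 limit cycles by Theorem 2
```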

Remark 1

Of course, by suitable choice of the parameters \(H_{2}(z)\) can have exactly k simple roots for \(k=0,1,2\). Then system (11) can have k hyperbolic limit cycles, which are obtained by the second order averaging method.

Remark 2

Under the conditions \(H_1(z)\equiv 0\) in (52) and \(H_2(z)\equiv 0\) in (53), applying the third order averaging function in Theorem 1, together with some calculations, we get the expression of \(H_3(z)\) as

$$\begin{aligned} H_{3}(z)= & {} \frac{4}{15}(a^-_{01}-a^+_{01})(c^-_{10}+d_{01}^-)z^4-\frac{2}{3}(a^+_{00}+a_{00}^-)(c^-_{10}+d_{01}^-)z^2\\&+\,\frac{1}{4}b^-_{00}b^-_{10}(b^+_{10}-b^-_{10})+\frac{1}{2}b^-_{00}(d^-_{10}-d^+_{10})\\&+\,\frac{1}{2}d^-_{00}(b^-_{10}-b^+_{10})+q^+_{00}-q^-_{00}. \end{aligned}$$

Clearly \(H_3(z)\) has at most 2 simple positive zeros. Consequently system (11) has at most 2 limit cycles using the third order averaging method.