1 Introduction

We consider equations and systems at resonance. A textbook example of resonance involves an equation like

$$\begin{aligned} x''(t)+x(t)=\sin t . \end{aligned}$$

All solutions of the corresponding homogeneous equation are bounded, while the periodic forcing term produces an unbounded response. A similar situation occurs for (\(t>0\))

$$\begin{aligned} x''(t)+x'(t)=1+\sin t . \end{aligned}$$

This is also a case of resonance, for which we shall consider nonlinear perturbations, involving pendulum-like equations, and study both periodic and unbounded solutions. We shall also deal with resonance for first order periodic equations and systems, particularly in the context of Massera’s theorem.

Consider a system with an \(n \times n\) p-periodic matrix A(t) and a p-periodic vector \(f(t) \in R^n\)

$$\begin{aligned} x'=A(t)x+f(t) , \end{aligned}$$
(1.1)

and the corresponding homogeneous system

$$\begin{aligned} x'=A(t)x . \end{aligned}$$
(1.2)

The famous theorem of J.L. Massera [14] says: if (1.1) has a bounded solution (\(||x(t)|| \le c\) uniformly in \(t>0\)), then it has a p-periodic solution. The original statement of Massera’s theorem is very intriguing, but it appears not to be easy to use. Indeed, if one manages to construct an explicit bounded solution of a periodic system, chances are that solution is already periodic. We shall deal with the following contrapositive form of Massera’s theorem.

Theorem 1.1

(Massera [14]) If (1.1) has no periodic solution, then all of its solutions are unbounded as \(t \rightarrow \infty \). Moreover, (1.2) has a p-periodic solution.

This form appears to be more natural. In particular, it becomes clear that Massera’s theorem deals with a case of resonance, and that this classical theorem admits a natural extension to a rather complete result, with detailed description of the dynamics of (1.1), see Theorem 2.1.

When studying equations with periodic coefficients, a natural first step is to investigate the existence of periodic solutions. What could be the second step? Traditionally, one studies the stability of periodic solutions, see e.g., B.P. Demidovič [7]. Motivated by Massera’s second theorem, see e.g., p. 203 in [8] (or R. Ortega [15] for a detailed presentation), G. Seifert [16] and J.M. Alonso and R. Ortega [1] showed that, in case periodic solutions are absent, all solutions are unbounded for equations at or near resonance. We present similar instability results for semilinear perturbations of linear systems:

$$\begin{aligned} x'+A(t)x+f(x)=g(t) , \end{aligned}$$
(1.3)

in case the Landesman–Lazer type condition is violated (rather than the Lazer-Leach condition used in [1, 16] and [6]). In the process we develop some results on periodic solutions of (1.3).

For a class of pendulum-like equations of the type

$$\begin{aligned} x''(t)+\lambda x'(t)+g(x)=f(t) , \end{aligned}$$

with periodic f(t), and for similar first order equations, we relied on a detailed description of the curves of periodic solutions developed in Korman [9, 10] to obtain conditions that are both necessary and sufficient for the existence of periodic solutions at resonance. Again, we obtained instability results in case periodic solutions are absent. We showed, by means of an example, how our results lead to an exhaustive description of the dynamics, supporting our findings with rather non-standard numerical computations.

To summarize: there are a number of situations in which conditions that are both necessary and sufficient for the existence of periodic solutions are available. In case such conditions are violated, we show that all solutions are unbounded, thus extending the previous work of Seifert [16], Alonso and Ortega [1], and Boscaggin et al [3]. Massera’s classical theorem was extended, placed in a broader context, and shown to fit the same pattern.

2 An Extension of Massera’s Theorem

To motivate the discussion, we begin with a simple case of a single equation

$$\begin{aligned} x'(t)+a(t)x(t)=f(t) , \end{aligned}$$
(2.1)

with continuous p-periodic functions a(t) and f(t), so that \(a(t+p)=a(t)\) and \(f(t+p)=f(t)\) for some \(p>0\), and all t. Write its general solution as

$$\begin{aligned} x(t)=\frac{1}{\mu (t)} c+\frac{1}{\mu (t)} \int _0^t \mu (s) f(s) \, ds, \end{aligned}$$
(2.2)

where \(\mu (t)=e^{\int _0^t a(s) \, ds}\), and c is an arbitrary constant. This formula shows that the dynamics is simple in case \(\int _0^p a(s) \, ds \ne 0\). Then there exists a unique p-periodic solution that attracts all other solutions as \(t \rightarrow \infty \) if \(\int _0^p a(s) \, ds>0\), and as \(t \rightarrow -\infty \), in case \(\int _0^p a(s) \, ds<0\), see e.g., [8] for the details. More interesting is the case

$$\begin{aligned} \int _0^p a(s) \, ds=0 , \end{aligned}$$
(2.3)

when the corresponding homogeneous equation

$$\begin{aligned} x'+a(t)x=0 \end{aligned}$$

has p-periodic solutions \(x(t)=\frac{c}{\mu (t)}\), where \(\mu (t)\) is p-periodic. There are two cases. If \(\int _0^p \mu (s) f(s) \, ds=0\) then clearly all solutions of (2.1) are p-periodic. In case \(\int _0^p \mu (s) f(s) \, ds \ne 0\), all solutions are unbounded as \(t \rightarrow \pm \infty \) (just consider x(mp) with \(m \rightarrow \pm \infty \), and observe that \(x(mp)-x(0)=m \int _0^p \mu (s) f(s) \, ds \), by the periodicity of \(\mu (t)\) and f(t)). Thus the condition (2.3) presents a full-fledged case of resonance, even though Eq. (2.1) is of first order.
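This dichotomy is easy to observe numerically. The following Python sketch (a minimal illustration; the choices \(a(t)=\sin t\), \(p=2\pi \), the two forcing terms, and the initial value are our own test data) computes \(\int _0^p \mu (s) f(s) \, ds\) and compares \(x(mp)-x(0)\) with \(m \int _0^p \mu (s) f(s) \, ds\).

import numpy as np
from scipy.integrate import quad, solve_ivp
# Test data: a(t) = sin t, p = 2*pi, so that int_0^p a = 0 and mu(t) = exp(1 - cos t) is p-periodic.
p = 2 * np.pi
a = np.sin
mu = lambda t: np.exp(1.0 - np.cos(t))
def drift(f):
    # int_0^p mu(s) f(s) ds; a nonzero value means all solutions of (2.1) are unbounded
    return quad(lambda s: mu(s) * f(s), 0.0, p)[0]
def x_end(t_end, f, x0=0.0):
    # solve x' + a(t) x = f(t) with x(0) = x0 up to t = t_end
    sol = solve_ivp(lambda t, x: -a(t) * x + f(t), (0.0, t_end), [x0], rtol=1e-10, atol=1e-12)
    return sol.y[0, -1]
m = 5
for f, label in [(np.sin, "f = sin t"), (lambda t: 1.0 + np.sin(t), "f = 1 + sin t")]:
    print(label, " drift =", round(drift(f), 6),
          " x(mp) - x(0) =", round(x_end(m * p, f), 6),
          " m * drift =", round(m * drift(f), 6))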

We consider now a p-periodic system

$$\begin{aligned} x'=A(t)x+f(t) , \end{aligned}$$
(2.4)

and the corresponding homogeneous system

$$\begin{aligned} x'=A(t)x . \end{aligned}$$
(2.5)

We assume that the \(n \times n\) matrix A(t) and the vector \(f(t) \in R^n\) have continuous entries, and \(A(t+p)=A(t)\), \(f(t+p)=f(t)\) for some \(p>0\) and all t. If X(t) is the fundamental solution matrix of (2.5), normalized by \(X(0)=I\), then the solution of (2.5) satisfying the initial condition \(x(0)=x_0\) is

$$\begin{aligned} x(t)=X(t)x_0 . \end{aligned}$$

For the non-homogeneous system (2.4), the solution satisfying the initial condition \(x(0)=x_0\), and denoted by \(x(t,x_0)\), is given by

$$\begin{aligned} x(t)=X(t)x_0+X(t) \int _0^t X^{-1}(s) f(s) \, ds . \end{aligned}$$
(2.6)

The homogeneous system (2.5) has a p-periodic solution, with \(x(p)=x(0)\), provided that the \(n \times n\) system of linear equations

$$\begin{aligned} \left( I-X(p) \right) x_0=0 \end{aligned}$$
(2.7)

has a non-trivial solution \(x_0\). Define the vector

$$\begin{aligned} b=X(p) \int _0^p X^{-1}(s) f(s) \, ds . \end{aligned}$$
(2.8)

The non-homogeneous system (2.4) has a p-periodic solution, with \(x(p)=x(0)\), provided that the system

$$\begin{aligned} \left( I-X(p) \right) x_0=b \end{aligned}$$
(2.9)

has a solution \(x_0\). If (2.4) has no p-periodic solutions, then the system (2.9) has no solution, so that the matrix \(I-X(p)\) is singular. Then (2.7) has non-trivial solutions, and (2.5) has a p-periodic solution. This justifies the extra claim of Massera’s Theorem 1.1.

In the theorem below we shall assume that the homogeneous system (2.5) has a p-periodic solution. Then the matrix X(p) has an eigenvalue 1, and the spectral radius of X(p) is \(\ge 1\) (recall that the spectral radius \(\rho (X(p))=\max |\lambda _i| \), maximum taken over all eigenvalues of X(p)).

Theorem 2.1

Assume that the homogeneous system (2.5) has a p-periodic solution (so that the matrix \(I-X(p)\) is singular). Let the vector b be defined by (2.8).

Case 1. b does not belong to the range of \(I-X(p)\). Then all solutions of (2.4) are unbounded as \(t \rightarrow \infty \). (The classical Massera’s Theorem 1.1.)

Case 2. b belongs to the range of \(I-X(p)\). Then (2.4) has infinitely many p-periodic solutions. Further sub-cases are as follows.

(i) If moreover \(\rho (X(p))>1\), then (2.4) has also unbounded solutions.

(ii) Assume that \(\rho (X(p))=1\), and \(\lambda =1\) is the only eigenvalue of X(p) on the unit circle \(|\lambda |=1\), and it has as many linearly independent eigenvectors as its multiplicity (i.e., the Jordan block corresponding to \(\lambda =1\) is diagonal). Then every solution of (2.4) approaches (orbitally) one of its p-periodic solutions, as \(t \rightarrow \infty \).

(iii) Suppose that \(\rho (X(p))=1\), and there are other eigenvalues of X(p) on the unit circle \(|\lambda |=1\), in addition to \(\lambda =1\). Assume that all eigenvalues of X(p) on the unit circle \(|\lambda |=1\) have diagonal Jordan blocks. Then all solutions of (2.4) are bounded, as \(t \rightarrow \infty \).

Proof

Let x(t) be any solution of (2.4), represented by (2.6). We shall consider the iterates x(mp), where m is a positive integer. With b as defined by (2.8)

$$\begin{aligned} x(p)=X(p)x_0+b . \end{aligned}$$

By periodicity, \(x(t+p)\) is also a solution of (2.4), which is equal to x(p) at \(t=0\). Using (2.6) again

$$\begin{aligned} x(t+p)=X(t)x(p)+X(t) \int _0^t X^{-1}(s) f(s) \, ds . \end{aligned}$$

Then

$$\begin{aligned} x(2p)=X(p)x(p)+b=X(p) \left( X(p)x_0+b \right) +b=X^2(p)x_0+X(p)b+b . \end{aligned}$$

By induction, for any integer \(m>0\),

$$\begin{aligned} x(mp)=X^m(p)x_0+\sum _{k=0}^{m-1} X^k(p) b . \end{aligned}$$
(2.10)

Case 1. Assume that b does not belong to the range of \(I-X(p)\). Then the linear system (2.9) has no solutions. Since \(\det \left( I-X(p) \right) ^T=\det \left( I-X(p) \right) =0\), it follows that the system

$$\begin{aligned} \left( I-X(p) \right) ^Tv=0 \end{aligned}$$
(2.11)

has non-trivial solutions, and we claim that it is possible to find a non-trivial solution \(v_0\) of (2.11) for which the scalar product with b satisfies

$$\begin{aligned} (b,v_0) \ne 0 . \end{aligned}$$
(2.12)

Indeed, assuming otherwise, b would be orthogonal to the null-space of \(\left( I-X(p) \right) ^T\), and then the linear system (2.9) would be solvable by the Fredholm alternative, a contradiction. From (2.11), \(v_0=X(p)^Tv_0\), then \(X(p)^Tv_0=X^2(p)^Tv_0\), which gives \(v_0=X^2(p)^Tv_0\), and inductively we get

$$\begin{aligned} v_0=X^k(p)^Tv_0, \; \;\text{ for } \text{ all } \text{ positive } \text{ integers }\,k. \end{aligned}$$
(2.13)

Then by (2.10)

$$\begin{aligned} \left( x(mp),v_0 \right) &=\left( X^m(p)x_0,v_0 \right) +\sum _{k=0}^{m-1} ( X^k(p) b,v_0)\\ &=(x_0,X^m(p)^T v_0)+\sum _{k=0}^{m-1} ( b,X^k(p)^T v_0)=(x_0, v_0)+m(b, v_0) \rightarrow \infty , \end{aligned}$$

as \(m \rightarrow \infty \), in view of (2.12).

Case 2. Assume now that b belongs to the range of \(I-X(p)\). Then the linear system (2.9) has a solution denoted by \(\bar{x}_0\), and \(x(t,\bar{x}_0)\) is a p-periodic solution of (2.4). Adding to it non-trivial solutions of the corresponding homogeneous system (2.5) produces infinitely many p-periodic solutions of (2.4).

Turning to the sub-cases, from (2.9)

$$\begin{aligned} \bar{x}_0=X(p) \bar{x}_0+b . \end{aligned}$$
(2.14)

Then

$$\begin{aligned} \bar{x}_0=X(p) \left( X(p) \bar{x}_0+b \right) +b=X^2(p) \bar{x}_0+X(p)b +b. \end{aligned}$$

Continuing to use the latest expression for \(\bar{x}_0\) in (2.14), obtain inductively

$$\begin{aligned} \bar{x}_0=X^m(p)\bar{x}_0+\sum _{k=0}^{m-1} X^k(p) b , \end{aligned}$$

so that \(\sum _{k=0}^{m-1} X^k(p) b=\bar{x}_0-X^m(p)\bar{x}_0\). Using this in (2.10), obtain

$$\begin{aligned} x(mp)=\bar{x}_0+X^m(p)\left( x_0-\bar{x}_0 \right) . \end{aligned}$$
(2.15)

In case \(\rho (X(p))>1\) (the sub-case (i)), we can choose a vector \(x_0\) to make x(mp) unbounded, producing an unbounded solution of (2.4) (choose \(x_0-\bar{x}_0\) to be an eigenvector of X(p) corresponding to an eigenvalue \(\lambda \), with \(|\lambda |>1\)).

In the sub-case (ii), assume for simplicity that X(p) has a complete set of eigenvectors \(z_1,z_2, \dots , z_k,\ldots , z_n\), with \(z_1,z_2, \dots , z_k\) corresponding to the eigenvalue \(\lambda =1\) of multiplicity \(k<n\), and the other eigenvectors corresponding to the eigenvalues with \(|\lambda |<1\). Decomposing \(x_0-\bar{x}_0=\sum _{i=1}^n c_iz_i\), obtain (since \(|\lambda |<1\) for all eigenvalues other than \(\lambda =1\))

$$\begin{aligned} X^m(p)\left( x_0-\bar{x}_0 \right) \rightarrow \sum _{i=1}^k c_iz_i \equiv y , \end{aligned}$$

where y is an eigenvector of X(p) corresponding to the eigenvalue \(\lambda =1\). It follows by (2.15) that for any \(x_0\), \(x(mp,x_0) \rightarrow \bar{x}_0+y\), and \( x(t,\bar{x}_0+y )\) is one of the p-periodic solutions of (2.4). By continuous dependence on initial conditions, \(x(t,x_0)\) stays close to the orbit of \( x(t,\bar{x}_0+y )\) over the interval \(t \in \left( mp,(m+1)p \right) \). Taking m large, one sees that \(x(t,x_0)\) gets arbitrarily close to the p-periodic solution \( x(t,\bar{x}_0+y )\), for t large. For the general case, one uses the Jordan normal form of X(p), replacing the eigenvectors corresponding to \(|\lambda |<1\) with the generalized eigenvectors.

In the sub-case (iii), similar arguments show that the sequence \(\{ x(mp) \}\) is bounded for any solution x(t) of (2.4). We claim that then x(t) is bounded. Indeed, solutions of (2.4) can have only a limited change over one period, by continuity, so that an unbounded solution cannot have the sequence \(\{ x(mp) \}\) bounded. \(\square \)

The assumption of Theorem 2.1 that the homogeneous system (2.5) has a p-periodic solution can be seen as a case of resonance. The complementary case when (2.5) does not have a p-periodic solution is easy. Then the matrix \(I-X(p)\) is non-singular, and hence the non-homogeneous system (2.4) has a unique p-periodic solution for any f(t). The difference of any two solutions of (2.4) satisfies (2.5), and therefore this p-periodic solution is stable if \(\rho \left( X(p) \right) <1\), and unstable if \(\rho \left( X(p) \right) >1\).
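For a concrete system, the classification of Theorem 2.1 reduces to linear algebra involving the monodromy matrix X(p) and the vector b of (2.8). The Python sketch below is a minimal illustration, not part of the analysis above; the test data (the system form of \(x''+x=\sin t\) from the Introduction) and the rank tolerance are our own choices.

import numpy as np
from scipy.integrate import solve_ivp
p = 2 * np.pi
n = 2
A = lambda t: np.array([[0.0, 1.0], [-1.0, 0.0]])   # system form of x'' + x = sin t
f = lambda t: np.array([0.0, np.sin(t)])
def flow(x0, forced):
    # solve x' = A(t) x (+ f(t) if forced) on [0, p] starting from x0
    rhs = (lambda t, x: A(t) @ x + f(t)) if forced else (lambda t, x: A(t) @ x)
    return solve_ivp(rhs, (0.0, p), x0, rtol=1e-10, atol=1e-12).y[:, -1]
# Monodromy matrix X(p): its columns are homogeneous solutions starting from the unit vectors.
Xp = np.column_stack([flow(e, forced=False) for e in np.eye(n)])
# By (2.6), the vector b of (2.8) equals the forced solution with x(0) = 0, evaluated at t = p.
b = flow(np.zeros(n), forced=True)
M, tol = np.eye(n) - Xp, 1e-8
if np.linalg.matrix_rank(M, tol) == n:
    print("No resonance: a unique p-periodic solution exists.")
elif np.linalg.matrix_rank(np.column_stack([M, b]), tol) > np.linalg.matrix_rank(M, tol):
    print("Case 1: b is not in the range of I - X(p), all solutions are unbounded.")
else:
    print("Case 2: infinitely many p-periodic solutions; rho(X(p)) =",
          round(max(abs(np.linalg.eigvals(Xp))), 6))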

3 Instability for a Class of First Order Equations

We now consider nonlinear perturbations of first order equations

$$\begin{aligned} x'+a(t)x+g(x)=f(t) , \end{aligned}$$
(3.1)

with \(g(x) \in C^1(R)\), and \(a(t), f(t) \in C(R)\), satisfying \(a(t+p)=a(t)\) and \(f(t+p)=f(t)\) for all t, and some \(p>0\). We assume that

$$\begin{aligned} \int _0^p a(t) \, dt=0 , \end{aligned}$$
(3.2)

so that the linear part of this equation is at resonance. Again, we denote \(\mu (t)=e^{\int _0^t a(s) \, ds}\), which by (3.2) is a p-periodic function. The nonlinear term g(x) is assumed to satisfy a condition of E.M. Landesman and A.C. Lazer [12]: the limits \(g(\infty )\) and \(g(-\infty )\) exist and

$$\begin{aligned} g(-\infty )<g(x)<g(\infty ), \; \;\text{ for } \text{ all }\,x \in (-\infty ,\infty ) . \end{aligned}$$
(3.3)

Theorem 3.1

Assume that (3.2) and (3.3) hold. The equation (3.1) has a p-periodic solution if and only if f(t) satisfies

$$\begin{aligned} g(-\infty ) \int _0^p \mu (t) \, dt<\int _0^p \mu (t) f(t) \, dt<g(\infty ) \int _0^p \mu (t) \, dt . \end{aligned}$$
(3.4)

If in addition to (3.2), (3.3) and (3.4)

$$\begin{aligned} g'(x)>0 , \; \;\text{ for } \text{ all }\,x \in R , \end{aligned}$$
(3.5)

then the equation (3.1) has a unique p-periodic solution that attracts all other solutions as \(t \rightarrow \infty \).

If the conditions (3.2) and (3.3) hold, but (3.4) fails, then all of the solutions of (3.1) are unbounded as \(t \rightarrow \infty \), and as \(t \rightarrow -\infty \).

Proof

Let x(t) be a p-periodic solution of (3.1). Multiply (3.1) by the p-periodic \(\mu (t)>0\), then integrate over (0, p). Integrate by parts, using that \(\mu '=\mu a(t)\) and \(\mu (0)=\mu (p)=1\), to obtain

$$\begin{aligned} \int _0^p g(x) \mu (t) \, dt=\int _0^p f(t)\mu (t) \, dt . \end{aligned}$$
(3.6)

It follows that (3.4) holds, in view of (3.3).

Conversely, assume that (3.4) holds. The existence of a p-periodic solution of (3.1) will follow by a simple fixed point argument. Indeed, write solutions of (3.1) as

$$\begin{aligned} x(t)=\frac{1}{\mu (t)} x(0)+\frac{1}{\mu (t)}\left[ \int _0^t \mu (s) f(s) \, ds -\int _0^t g(x(s)) \mu (s) \, ds \right] . \end{aligned}$$

Observe that a(t) is bounded from above and from below by continuity, and the same is true for \(\mu (t)\) by periodicity. Hence if \(A>0\) is large, then x(t, A) is large for all \(t \in (0,p)\). Then g(x(t)) is close to \(g(\infty )\), and, in view of (3.4), the term in the square bracket is negative at \(t=p\), and hence \(x(p,A)<A\). Similarly, \(x(p,-A)>-A\), for \(A>0\) large. It follows that the continuous Poincaré map \(x_0 \rightarrow x(p,x_0)\) takes the interval \((-A,A)\) into itself. There exists a fixed point, leading to a p-periodic solution.

Assume now that the condition (3.4) fails. Assume for definiteness that

$$\begin{aligned} \int _0^p \mu (t) f(t) \, dt \ge g(\infty ) \int _0^p \mu (t) \, dt , \end{aligned}$$
(3.7)

and the case when \(\int _0^p \mu (t) f(t) \, dt \le g(-\infty ) \int _0^p \mu (t) \, dt\) is similar. Let x(t) be any solution of (3.1). Multiply (3.1) by the p-periodic \(\mu (t)>0\), then integrate over (0, p). Since x(t) is no longer assumed to be periodic, integration by parts produces two extra terms. Similarly to (3.6) obtain

$$\begin{aligned} x(p)-x(0)&=\int _0^p f(t)\mu (t) \, dt-\int _0^p g(x(t)) \mu (t) \, dt\\ &>\int _0^p f(t)\mu (t) \, dt-g(\infty ) \int _0^p \mu (t) \, dt \equiv \alpha \ge 0 . \end{aligned}$$
(3.8)

Assume first that \(\alpha >0\), i.e., the inequality in (3.7) is strict. Then

$$\begin{aligned} x(p)-x(0)>\alpha >0 . \end{aligned}$$

Apply a similar argument on [p, 2p], and use the periodicity of \(\mu (t)\) and f(t) to get

$$\begin{aligned} x(2p)-x(p)>\alpha >0, \end{aligned}$$

so that \(x(2p)-x(0)>2\alpha \). Then \(x(mp)-x(0)>m\alpha \) for any integer \(m>0\), and hence x(t) is unbounded. In case \(\alpha =0\), we have \(x(p)-x(0)>0\) from (3.8), so that the Poincaré map \(x(0) \rightarrow x(p,x(0))\) satisfies \( x(p,x(0))>x(0)\) for all \(x(0) \in R\). The increasing sequence \(\{x(mp) \}\) has to go to infinity, since otherwise it would have to converge to a limit, which is a fixed point of the Poincaré map. But fixed points are not possible for a map that takes any number into a larger one.

Assume finally that (3.2), (3.3), (3.4) and (3.5) hold. By the above, there is a p-periodic solution of (3.1), call it y(t). Let x(t) be any other solution of (3.1), and set \(z(t)=x(t)-y(t)\). By the mean value theorem z(t) satisfies a linear equation

$$\begin{aligned} z'+b(t)z =0, \end{aligned}$$

with p-periodic \(b(t)=a(t)+\int _0^1\,g' \left( sx(t)+(1-s)y(t) \right) \, ds>a(t)\), so that \(\int _0^p b(s) \, ds>0\). It follows that \(z(t) \rightarrow 0\), as \(t \rightarrow \infty \). In particular, this implies that the periodic solution y(t) is unique, and it attracts all other solutions as \(t \rightarrow \infty \). \(\square \)
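The fixed point argument used in the sufficiency part is straightforward to carry out numerically. The following Python sketch is a minimal illustration with our own test data \(a(t)=\sin t\), \(g(x)=\tanh x\), \(f(t)=0.3+\sin t\) (for which (3.2), (3.3) and (3.4) hold); it locates a fixed point of the Poincaré map by a root search on \(x(p,x_0)-x_0\).

import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import brentq
p = 2 * np.pi
a, g = np.sin, np.tanh                     # test data: g(-inf) = -1 < g(x) < 1 = g(inf)
f = lambda t: 0.3 + np.sin(t)              # int_0^p mu f = 0.3 * int_0^p mu, so (3.4) holds
def poincare(x0):
    # x(p, x0) for the equation x' + a(t) x + g(x) = f(t)
    rhs = lambda t, x: -a(t) * x - g(x) + f(t)
    return solve_ivp(rhs, (0.0, p), [x0], rtol=1e-10, atol=1e-12).y[0, -1]
# As in the proof, the map sends a large interval (-A, A) into itself, so that
# poincare(x0) - x0 changes sign there; brentq then finds a fixed point.
A_big = 100.0
x_star = brentq(lambda x0: poincare(x0) - x0, -A_big, A_big)
print("periodic solution starts at x(0) =", round(x_star, 6))
print("periodicity defect x(p) - x(0)   =", poincare(x_star) - x_star)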

The argument we gave above for the case \(\alpha =0\) could be replaced by an application of the following result of Alonso and Ortega [1] (Corollary 2.3 in [1]).

Proposition 3.1

([1]) Consider a difference equation on a finite dimensional Banach space X:

$$\begin{aligned} \xi _{n+1}=F \left( \xi _{n} \right) , \; \;\; \;n \ge 0 , \end{aligned}$$

where \(F: X \rightarrow X\) is a continuous operator. If there exists a continuous functional V satisfying

$$\begin{aligned} V \left( F(\xi )\right) >V(\xi ) , \; \;\; \;\forall \xi \in X , \end{aligned}$$

then \(\lim _{n \rightarrow \infty } ||\xi _{n}||=\infty \).

Example Consider an equation with the linear part at resonance

$$\begin{aligned} x'(t)+\sin t \, x(t)+ \frac{2}{\pi } \tan ^{-1} x(t)=\nu +\sin t , \end{aligned}$$
(3.9)

where \(\nu \) is a parameter. Here \(p=2 \pi \), \(a(t)=\sin t\), \(f(t)=\nu +\sin t\), \(g(x)=\frac{2}{\pi } \tan ^{-1} x\), so that \(g(-\infty )=-1\) and \(g(\infty )=1\), with \(g'(x)>0\). Calculate \(\mu (t)=e^{1-\cos t}\), \(\int _0^{2 \pi } \mu (t) f(t) \, dt=\nu \int _0^{2 \pi } \mu (t) \, dt\). The condition (3.4) becomes

$$\begin{aligned} -1<\nu <1 . \end{aligned}$$

Theorem 3.1 leads to the following conclusion: If \(\nu \in (-1,1)\) the equation (3.9) has a unique \(2 \pi \)-periodic solution that attracts all of its other solutions as \(t \rightarrow \infty \). If \(\nu \ge 1\) or \(\nu \le -1\), then all solutions of (3.9) are unbounded, both as \(t \rightarrow \infty \) and as \(t \rightarrow -\infty \).

It turns out that \(2\pi \)-periodic solutions of (3.9) tend to infinity as \(\nu \rightarrow \pm 1\), see Fig. 1. In that figure \(\xi \) is the average of the \(2\pi \)-periodic solutions x(t), so that \(x(t)=\xi +X(t)\), with \(\int _0^{2\pi } X(t) \, dt=0\) (see Theorem 5.3 below). We used a modification of the Mathematica program presented and explained in [11].

We thus obtained an exhaustive description of the dynamics of (3.9), easily confirmed by numerical experiments.
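One such experiment can be reproduced with the short Python sketch below (our own, and not the Mathematica program of [11]): it integrates (3.9) over thirty periods and prints the iterates \(x(2\pi m)\) for values of \(\nu \) inside and outside of \((-1,1)\).

import numpy as np
from scipy.integrate import solve_ivp
p = 2 * np.pi
def iterates(nu, x0=0.0, m=30):
    # values x(0), x(p), ..., x(mp) of the solution of (3.9) with x(0) = x0
    rhs = lambda t, x: -np.sin(t) * x - (2 / np.pi) * np.arctan(x) + nu + np.sin(t)
    sol = solve_ivp(rhs, (0.0, m * p), [x0], t_eval=np.arange(m + 1) * p,
                    rtol=1e-10, atol=1e-12)
    return sol.y[0]
for nu in (0.5, 1.0, 1.5):
    x = iterates(nu)
    print("nu =", nu, " x(10p) =", round(x[10], 4), " x(30p) =", round(x[30], 4))
# Expected behaviour: the iterates settle down for nu = 0.5, and drift upward for nu = 1.0, 1.5.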

Fig. 1 The curve of \(2\pi \)-periodic solutions of (3.9), with their averages \(\xi \) drawn versus \(\nu \), the average of the forcing term

Remark   Suppose that the condition (3.4) holds, but (3.5) does not. Then the equation (3.1) has a p-periodic solution, but the asymptotic behavior of other solutions is an open problem.

4 Unbounded Solutions for a Class of Systems

We recall some basic results on linear periodic systems. Consider the adjoint system for the homogeneous p-periodic system (2.5)

$$\begin{aligned} z'=-A^T(t)z , \end{aligned}$$
(4.1)

where \(A^T\) denotes the transpose. The following two lemmas can be found in Demidovič [7]. We include slightly simpler proofs for completeness.

Lemma 4.1

If the system (2.5) has a non-trivial p-periodic solution, then so does (4.1).

Proof

Let X(t) be again the fundamental solution matrix of (2.5). We are given that X(p) has an eigenvalue \(\lambda =1\). Recall that

$$\begin{aligned} X'=A(t)X . \end{aligned}$$
(4.2)

Let Z(t) be the fundamental solution matrix of (4.1), so that

$$\begin{aligned} Z'=-A^T(t)Z . \end{aligned}$$
(4.3)

We claim that \(Z=Y^{-1}(t)\), where \(Y(t)=X^T(t)\), i.e., \(Z=\left( X^T \right) ^{-1}\). Indeed, using that \(Z'=-Y^{-1}Y'Y^{-1}\) (differentiate \(YY^{-1}=I\), or see e.g., p. 5 in R. Bellman [2]), in order to justify that Z satisfies (4.3) the following equivalent statements must hold:

$$\begin{aligned} -Y^{-1}Y'Y^{-1}&=-A^T Y^{-1} ,\\ -Y^{-1}Y'&=-A^T ,\\ Y'&=YA^T ,\\ \left( X^T \right) '&=X^T A^T ,\\ X'&=AX , \end{aligned}$$

which is (4.2), proving the claim. The eigenvalues of Y(p) are the same as those of X(p), so that one of them is \(\lambda =1\). The eigenvalues of Z(p) are the reciprocals of those of Y(p), so that one of them is \(\lambda =1\), and the system (4.1) has a p-periodic solution. \(\square \)
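The identity \(Z(t)=\left( X^T(t) \right) ^{-1}\) used in the proof, and the resulting reciprocity of the eigenvalues of X(p) and Z(p), can be confirmed numerically. The Python sketch below is our own check, for a hypothetical \(2 \times 2\) p-periodic matrix A(t).

import numpy as np
from scipy.integrate import solve_ivp
p = 2 * np.pi
A = lambda t: np.array([[np.sin(t), 1.0], [-1.0, np.cos(t)]])   # hypothetical p-periodic test matrix
def fundamental(B):
    # fundamental matrix at t = p of y' = B(t) y, with columns computed from the unit vectors
    cols = [solve_ivp(lambda t, y: B(t) @ y, (0.0, p), e,
                      rtol=1e-11, atol=1e-13).y[:, -1] for e in np.eye(2)]
    return np.column_stack(cols)
Xp = fundamental(A)                     # fundamental matrix of (2.5) (i.e., of (4.2)) at t = p
Zp = fundamental(lambda t: -A(t).T)     # fundamental matrix of the adjoint system (4.1) at t = p
print("max |Z(p) - (X^T(p))^{-1}| =", np.max(np.abs(Zp - np.linalg.inv(Xp.T))))
print("eigenvalues of X(p):", np.round(np.linalg.eigvals(Xp), 6))
print("eigenvalues of Z(p):", np.round(np.linalg.eigvals(Zp), 6))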

Lemma 4.2

Assume that the homogeneous system (2.5) has a p-periodic solution. Then the non-homogeneous system (2.4) has a p-periodic solution if and only if

$$\begin{aligned} \int _0^p f(t) \cdot z(t) \, dt=0 , \end{aligned}$$
(4.4)

for every p-periodic solution z(t) of (4.1).

Proof

Let x(t) and z(t) be p-periodic solutions of (2.4) and (4.1) respectively. To prove the necessity part, multiply the i-th equation of (2.4) by \(z_i\), the i-th equation of (4.1) by \(x_i\), add the equations, and sum over i (in effect, taking the scalar product of (2.4) with z and of (4.1) with x, and adding). Then integrate, and use the periodicity of solutions to obtain:

$$\begin{aligned} \int _0^p f(t) \cdot z(t) \, dt=\int _0^p \left[ Ax \cdot z- x \cdot A^T z \right] \, dt =0 . \end{aligned}$$

Turning to the sufficiency part, any p-periodic solution of (4.1) can be written as \(z(t)=Z(t)z_0\), where \(z_0\) satisfies

$$\begin{aligned} \left[ I-Z(p) \right] z_0=\left[ I-\left( {X^T} \right) ^{-1}(p) \right] z_0=0 , \end{aligned}$$

which can be written as

$$\begin{aligned} z_0^TX(p)=z_0^T , \end{aligned}$$
(4.5)

or as

$$\begin{aligned} \left( I-X^T(p) \right) z_0=0 . \end{aligned}$$
(4.6)

We are given that (4.4) holds, which can be written as

$$\begin{aligned} 0=\int _0^p \left( Z(t)z_0 \right) ^T f(t) \, dt=z_0^T \int _0^p X^{-1}(t) f(t) \, dt . \end{aligned}$$
(4.7)

Since the system (2.5) has a p-periodic solution, both (2.7) and (4.6) have non-trivial solutions. In order for (2.4) to have a p-periodic solution, the system of equations (2.9) has to be solvable, which by the Fredholm alternative happens if and only if the vector b defined in (2.8) is orthogonal to every solution \(z_0\) of (4.6). Using (4.5) and (4.7), obtain

$$\begin{aligned} b \cdot z_0=z_0^Tb=z_0^T X(p) \int _0^p X^{-1}(t) f(t) \, dt=z_0^T \int _0^p X^{-1}(t) f(t) \, dt=0 , \end{aligned}$$

completing the proof. \(\square \)

We wish to extend Theorem 3.1 to systems. This can be done in a number of ways. For example, a recent paper of A. Boscaggin et al [3] considered coupled harmonic oscillators, each one at resonance. They showed that solutions are unbounded if a condition of A.C. Lazer and D.E. Leach [13] type is violated. We shall obtain a straightforward extension of Theorem 3.1 (which used a condition of E.M. Landesman and A.C. Lazer [12]) provided that the adjoint system (4.1) has a positive p-periodic solution, and give a condition for that to happen.

We consider bounded nonlinear perturbations of linear systems

$$\begin{aligned} x'+A(t)x+f(x)=g(t) . \end{aligned}$$
(4.8)

(The notation is slightly changed compared to Section 3.) Here the \(n \times n\) matrix A(t) and the vector \(g(t) \in R^n\) have continuous p-periodic entries, \(x=x(t) \in R^n\) is the unknown vector, and

$$\begin{aligned} f(x)= \left[ \begin{array}{c} f_1(x) \\ \vdots \\ f_n(x) \end{array} \right] \end{aligned}$$

is a continuous vector function. We assume that the linear part is at resonance, so that both

$$\begin{aligned} x'+A(t)x=0 , \end{aligned}$$
(4.9)

and

$$\begin{aligned} z'-A^T(t)z=0 \end{aligned}$$
(4.10)

have non-trivial p-periodic solutions, and moreover that (4.10) has a p-periodic solution z(t) which is positive componentwise (\(z_i(t)>0\) for all i). The components of the vector f(x) are assumed to satisfy

$$\begin{aligned} \alpha _i<f_i(x)<\beta _i , \; \;\; \;\text{ for } \text{ all }\,x \in R^n,\hbox { and all } i , \end{aligned}$$
(4.11)

with 2n given constants \(\alpha _i, \beta _i\).

Theorem 4.1

Assume that the adjoint system (4.10) has a positive p-periodic solution z(t), and (4.11) holds. Then the system (4.8) may have a p-periodic solution only if

$$\begin{aligned} \sum _{i=1}^n \alpha _i \int _0^p z_i(t) \, dt<\int _0^p g(t) \cdot z(t) \, dt< \sum _{i=1}^n \beta _i \int _0^p z_i(t) \, dt . \end{aligned}$$
(4.12)

In case this condition fails, all solutions of (4.8) are unbounded as \(t \rightarrow \pm \infty \).

Proof

If x(t) is a p-periodic solution of (4.8), then f(x(t)) is a p-periodic function. Applying Lemma 4.2 obtain

$$\begin{aligned} \int _0^p f(x(t)) \cdot z(t) \, dt=\int _0^p g(t) \cdot z(t) \, dt , \end{aligned}$$

from which (4.12) follows, since \(z(t)>0\).

Assume now that the condition (4.12) fails. Suppose for definiteness that

$$\begin{aligned} \sum _{i=1}^n \alpha _i \int _0^p z_i(t) \, dt \ge \int _0^p g(t) \cdot z(t) \, dt . \end{aligned}$$

Multiply the i-th equation in (4.8) by \(z_i(t)\), integrate over (0, p), and then sum over i. Integrating by parts, and using the p-periodicity of z(t) and (4.11), obtain

$$\begin{aligned} \sum _{i=1}^n z_i(0) \left[ x_i(p)-x_i(0) \right] &=\int _0^p g(t) \cdot z(t) \, dt-\int _0^p f(x) \cdot z(t) \, dt\\ &<\int _0^p g(t) \cdot z(t) \, dt-\sum _{i=1}^n \alpha _i \int _0^p z_i(t) \, dt \le 0 . \end{aligned}$$

It follows that

$$\begin{aligned} -\sum _{i=1}^n z_i(0) x_i(p) >-\sum _{i=1}^n z_i(0) x_i(0) . \end{aligned}$$

(In effect, we took the scalar product of (4.8) with z, and integrated.) We now apply Proposition 3.1, with \(X=R^n\), the Poincaré map \(F: x(0) \rightarrow x(p)\), and the functional \(V\left( x(t) \right) =-\sum _{i=1}^n z_i(0) x_i(t)\) to conclude the unboundedness of the sequence \(\{x(mp) \}\). \(\square \)
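Once a positive p-periodic solution z(t) of the adjoint system is available, checking the condition (4.12) amounts to quadrature. The Python sketch below is purely illustrative: the functions z(t) and g(t) and the bounds \(\alpha _i, \beta _i\) are hypothetical placeholders, not data coming from a particular system (4.8).

import numpy as np
from scipy.integrate import quad
p = 2 * np.pi
z = lambda t: np.array([1.5 + np.cos(t), 2.0 + np.sin(t)])          # assumed positive p-periodic adjoint solution
gv = lambda t: np.array([0.4 + np.sin(t), -0.1 + np.cos(2 * t)])    # forcing term g(t)
alpha, beta = np.array([-1.0, -0.5]), np.array([1.0, 0.5])          # bounds from (4.11)
zint = np.array([quad(lambda t, i=i: z(t)[i], 0.0, p)[0] for i in range(2)])
middle = quad(lambda t: gv(t) @ z(t), 0.0, p)[0]
print("lower bound  sum alpha_i int z_i :", round(alpha @ zint, 6))
print("middle       int g . z           :", round(middle, 6))
print("upper bound  sum beta_i int z_i  :", round(beta @ zint, 6))
print("(4.12) holds:", bool(alpha @ zint < middle < beta @ zint))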

Remark We do not know if the conditions (4.11) and (4.12) are sufficient for the existence of p-periodic solutions of (4.8). As mentioned in A. Boscaggin et al [3], few existence results are known for semilinear periodic systems.

To give a condition for (4.10) to have a positive p-periodic solution we need the following lemma.

Lemma 4.3

Assume that B(t) is a continuous \(n \times n\) matrix with positive off-diagonal entries for all \(t>0\), and let \(e_i \in R^n\) be the vector whose i-th entry equals one, with all other entries zero. Then the solution of

$$\begin{aligned} y'=B(t)y , \; \;\; \;y(0)=e_i \end{aligned}$$
(4.13)

satisfies \(y(t)>0\) for all \(t>0\), and all \(i=1,2,\ldots ,n\).

Proof

We can find a constant matrix \(B_0\) with positive off-diagonal entries, such that \(B(t)>B_0\) for small t. It is well known that solutions of

$$\begin{aligned} x'=B_0x , \; \;\; \;x(0)=e_i \end{aligned}$$

are positive for all \(t>0\), see e.g., p. 176 in R. Bellman [2]. Since \(y(t)>x(t)\) componentwise, it follows that \(y(t)>0\) for small t. At the first \(t_0\) where \(y_k(t_0)=0\) for some k, there is a contradiction in the k-th equation of (4.13) at \(t=t_0\), since \(y_k'(t_0)>0\) from (4.13). Hence, \(y(t)>0\) for all \(t>0\). \(\square \)

Proposition 4.1

Assume that the off-diagonal entries of a p-periodic matrix A(t) are positive, and that the spectral radius of the fundamental matrix Z(t) of (4.10) satisfies \(\rho (Z(p))=1\). Then (4.10) has a positive p-periodic solution.

Proof

By Lemma 4.3 the fundamental matrix Z(p) of (4.10) has positive entries. By the Perron-Frobenius theorem, the eigenvalue of Z(p) of largest absolute value is positive, and since \(\rho (Z(p))=1\), it is \(\lambda =1\), and the corresponding eigenvector \(\xi \) is also positive. Then \(Z(t) \xi \) gives a positive p-periodic solution of (4.10). \(\square \)

For the \(2 \times 2\) case we give conditions that appear easier to check.

Proposition 4.2

Assume that A(t) is a p-periodic \(2 \times 2\) matrix with \(a_{12}(t)>0\) and \(a_{21}(t)>0\) for all t, and \(\int _0^p \left[ a_{11}(t)+a_{22}(t) \right] \, dt \le 0\). Finally, assume that the fundamental matrix Z(t) of (4.10) is such that Z(p) has an eigenvalue \(\lambda =1\). Then (4.10) has a positive p-periodic solution.

Proof

By Lemma 4.3 the fundamental matrix Z(p) of (4.10) has positive entries. If \(\lambda _1\) and \(\lambda _2\) are the eigenvalues of Z(p), then \(\lambda _1=1\), and by Liouville’s formula, see e.g., p. 212 in [8],

$$\begin{aligned} 0<\textrm{Det } \, Z(p) =\lambda _1 \lambda _2=e^{\int _0^p \left[ a_{11}(t)+a_{22}(t) \right] \, dt} \le 1, \end{aligned}$$

so that \(0 \le \lambda _2 \le 1\). By the Perron-Frobenius theorem \(0 \le \lambda _2 < 1\), and the eigenvector \(\xi \) corresponding to \(\lambda _1 =1\) is positive. Then \(Z(t) \xi \) gives a positive p-periodic solution of (4.10). \(\square \)
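The Perron-Frobenius step in the last two proofs can be observed numerically: for a p-periodic A(t) with positive off-diagonal entries, the matrix Z(p) has positive entries, and its dominant eigenvalue is positive with a positive eigenvector. The Python sketch below uses a hypothetical test matrix A(t); for it the dominant eigenvalue need not equal 1, which is the extra assumption of Propositions 4.1 and 4.2.

import numpy as np
from scipy.integrate import solve_ivp
p = 2 * np.pi
A = lambda t: np.array([[-0.3 + 0.1 * np.sin(t), 0.5],
                        [0.4, -0.2 + 0.1 * np.cos(t)]])     # positive off-diagonal entries
cols = [solve_ivp(lambda t, z: A(t).T @ z, (0.0, p), e,      # fundamental matrix of z' = A^T(t) z
                  rtol=1e-10, atol=1e-12).y[:, -1] for e in np.eye(2)]
Zp = np.column_stack(cols)
lam, vec = np.linalg.eig(Zp)
k = np.argmax(np.abs(lam))                  # Perron eigenvalue: the one of largest modulus
xi = np.real(vec[:, k])
xi = xi if xi[0] > 0 else -xi               # fix the sign; Perron-Frobenius gives xi > 0
print("Z(p) has positive entries:", bool(np.all(Zp > 0)))
print("Perron eigenvalue of Z(p):", round(float(np.real(lam[k])), 6))
print("Perron eigenvector       :", np.round(xi, 6))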

5 Solution Curves and Unboundedness of Solutions

We now consider nonlinear perturbations of a second order periodic problem at resonance

$$\begin{aligned} x''(t)+\lambda x'(t)+g(x)=f(t) , \end{aligned}$$
(5.1)

with \(g(x) \in C^1(R)\), \(f(t) \in C(R)\) satisfying \(f(t+p)=f(t)\) for all t and some \(p>0\), and a constant \(\lambda >0\). This pendulum-like equation was studied previously in a number of papers, including J. Čepička et al [5], G. Tarantello [17], A. Castro [4]. As mentioned above, the linear part of this equation (when \(g(x) \equiv 0\)) is at resonance. Decompose \(f(t)=\mu +e(t)\), with \(\mu \in R\) and \(\int _0^p e(t) \, dt=0\). Similarly, decompose the solution \(x(t)=\xi +X(t)\), with \(\xi \in R\) and \(\int _0^p X(t) \, dt=0\). In view of the above decomposition, we may write (5.1) as

$$\begin{aligned} x''+\lambda x'+g(x)=\mu +e(t) . \end{aligned}$$
(5.2)

The following result we proved in [9].

Theorem 5.1

Assume that \(g(x) \in C^1(R)\) is a bounded function (\(|g(x)| \le M\) for all \(x \in R\) and some \(M>0\)), and

$$\begin{aligned} |g'(x)|<\frac{\lambda ^2}{4} +\omega ^2 , \; \;\text{ for } \text{ all }\,x \in R , \; \;\; \;\text{ where }\,\omega =\frac{2 \pi }{p} . \end{aligned}$$
(5.3)

Then for any \(\xi \in R\) one can find a unique \(\mu \in R\) for which the problem (5.2) has a unique p-periodic solution. Moreover, all p-periodic solutions of (5.2) lie on a unique continuous solution curve \((\mu ,x(t))(\xi )\).

We now give an instability result based on the Landesman–Lazer [12] condition.

Theorem 5.2

In addition to the conditions of the Theorem 5.1 assume that the limits at infinity \(g(\pm \infty )\) exist, and

$$\begin{aligned} g(-\infty )<g(x)<g(\infty ) , \; \;\text{ for } \text{ all }\,x \in R. \end{aligned}$$
(5.4)

Then the equation (5.2) has a p-periodic solution if and only if

$$\begin{aligned} g(-\infty )< \mu <g(\infty ) . \end{aligned}$$
(5.5)

If the condition (5.5) fails, then all of the solutions of (5.2) are unbounded as \(t \rightarrow \infty \), and as \(t \rightarrow -\infty \).

Proof

Let x(t) be a p-periodic solution of (5.2). Integrate the equation (5.2) over (0, p):

$$\begin{aligned} \mu p=\int _0^p g(x(t)) \, dt . \end{aligned}$$
(5.6)

Then the necessity of the condition (5.5) follows by (5.4).

By Theorem 5.1 there is a continuous solution curve \((\mu ,x(t))(\xi )\) for \(\xi \in R\). Moreover, we showed in [9] that with \(x(t)=\xi +X(t)\), there is a bound on |X(t)| which is uniform in \(\xi \) and t. It follows from (5.6) that \(\mu \rightarrow g(\infty )\) (\(\mu \rightarrow g(-\infty )\)) as \(\xi \rightarrow \infty \) (\(\xi \rightarrow -\infty \)). By the continuity of the solution curve, it follows that the condition (5.5) is sufficient for the existence of a p-periodic solution.

If the condition (5.5) fails, assume for definiteness that

$$\begin{aligned} \mu \ge g(\infty ) . \end{aligned}$$
(5.7)

If x(t) is any solution of (5.2), integration of this equation gives

$$\begin{aligned} x'(p)-x'(0)+\lambda \left( x(p)-x(0) \right)&=\mu p-\int _0^p g(x(t)) \, dt \\ &> \mu p-g(\infty ) p \ge 0 , \end{aligned}$$

so that \(x'(p)+\lambda x(p)>x'(0)+\lambda x(0)\). We now apply Proposition 3.1, with \(X=R^2\), the Poincaré map \(F: \left( x(0),x'(0) \right) \rightarrow \left( x(p),x'(p) \right) \), and the functional \(V\left( x(t),x'(t) \right) =x'(t)+\lambda x(t)\) to conclude the unboundedness of the sequence \(\{x(mp),x'(mp) \}\), proving the unboundedness of solutions of (5.2). \(\square \)
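The growth of the functional \(V(x,x')=x'+\lambda x\) along the period map is easy to see in a computation. The following Python sketch is a minimal illustration with our own test data \(\lambda =0.5\), \(g(x)=\tanh x\) (so that \(g(\infty )=1\)), \(e(t)=\sin t\) and \(\mu =1.2 \ge g(\infty )\), so that the condition (5.5) fails.

import numpy as np
from scipy.integrate import solve_ivp
p = 2 * np.pi
lam, mu = 0.5, 1.2                         # mu >= g(infinity) = 1, so (5.5) fails
g, e = np.tanh, np.sin
def rhs(t, y):
    x, v = y
    return [v, -lam * v - g(x) + mu + e(t)]   # (5.2) written as a first order system
m = 20
sol = solve_ivp(rhs, (0.0, m * p), [0.0, 0.0], t_eval=np.arange(m + 1) * p,
                rtol=1e-10, atol=1e-12)
V = sol.y[1] + lam * sol.y[0]              # V(x, x') = x' + lambda * x at t = 0, p, 2p, ...
print("V at the iterates m = 0..5:", np.round(V[:6], 4))
print("V is strictly increasing  :", bool(np.all(np.diff(V) > 0)))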

Similar results hold for first order periodic equations of the type

$$\begin{aligned} x'(t)+g(x)=\mu +e(t) , \end{aligned}$$
(5.8)

with \(\mu \in R\), \(e(t) \in C(R)\), satisfying \(e(t+p)=e(t)\) for all t and some \(p>0\), and \(\int _0^p e(t) \, dt=0\). As above, decompose the p-periodic solutions of (5.8) as \(x(t)=\xi +X(t)\), with \(\xi \in R\) and \(\int _0^p X(t) \, dt=0\). The following lemma allows us to sharpen Theorem 6.1 in [9], and the improved version is used in the proof of Theorem 5.3 below.

Lemma 5.1

Consider a linear periodic problem in the class of functions of zero average

$$\begin{aligned} w'(t)+h(t)w(t)=\mu , \; \;w(t+p)=w(t), \; \;\; \;\int _0^p w(t) \, dt=0 , \end{aligned}$$
(5.9)

where \(h(t) \in C(R)\) is a given function of period p, and \(\mu \) is a parameter. The only solution of (5.9) is \(\mu =0\) and \(w(t) \equiv 0\).

Proof

We claim that w(t) is of one sign. If \(\mu =0\) this follows from the explicit solution \(w(t)=w(0)e^{-\int _0^t h(s) \, ds}\). If, say, \(\mu >0\) and w(t) is a sign changing solution, then by the periodicity of w(t) one can find a point \(t_0\) such that \(w(t_0)=0\) and \(w'(t_0) \le 0\), which contradicts the equation (5.9), since it gives \(w'(t_0)=\mu >0\). Since w(t) is of one sign and of zero average, \(w(t) \equiv 0\), and then \(\mu =0\) from the equation (5.9). \(\square \)

Theorem 5.3

Assume that \(g(x) \in C^1(R)\), \(e(t) \in C(R)\) is p-periodic of zero average. Then for any \(\xi \in R\) one can find a unique \(\mu \in R\) for which the problem (5.8) has a unique p-periodic solution. Moreover, all p-periodic solutions of (5.8) lie on a unique continuous solution curve \((\mu ,x(t))(\xi )\).

Proof

Local properties of the solution curve, and the fact that \(\xi \) is a global parameter, were proved in [9]. We show next that \(\mu \) and x(t) are bounded, when \(\xi \) belongs to a bounded set, so that the solution curve can be continued globally, for \(-\infty<\xi <\infty \). With \(x(t)=\xi +X(t)\), obtain

$$\begin{aligned} X'(t)+g(\xi +X(t))=\mu +e(t) , \; \;X(t+p)=X(t) , \; \;\int _0^p X(t) \, dt=0. \end{aligned}$$
(5.10)

Multiply the equation in (5.10) by \(X'\) and integrate over (0, p). By periodicity of X(t)

$$\begin{aligned} \int _0^p {X'}^2(t) \, dt=\int _0^p X'(t) e(t) \, dt , \end{aligned}$$

which gives a bound on \(\int _0^p {X'}^2(t) \, dt\). (If G(u) denotes an antiderivative of g(u), then \(\int _0^p g(\xi +X(t))X' \, dt=G(\xi +X(t))|_0^p=0\).) By Wirtinger’s inequality obtain a bound on \(\int _0^p X^2(t) \, dt\). With X(t) bounded in \(H^1\) norm, conclude a uniform bound on X(t), and hence on x(t), by Sobolev embedding. From (5.8)

$$\begin{aligned} \mu ^2 \le c_0 \left( {X'}^2(t)+g^2(x(t))+e^2(t) \right) , \end{aligned}$$

with some \(c_0>0\). Integrating over (0, p), obtain a bound on \(\mu \). \(\square \)

For the equation (3.9), considered above, we computed the \(\mu =\mu (\xi )\) section of the solution curve described in Theorem 5.3, and then plotted the inverse function \(\xi =\xi (\mu )\) to produce Fig. 1. Theorem 5.3 can also be used to provide an alternative proof of Theorem 3.1 (with extra information on the solution curve), similarly to Theorem 5.2.
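One possible way to reproduce such a computation (the shooting set-up below is our own sketch, and not the Mathematica program of [11]) is to solve, for each prescribed average \(\xi \), a system of two equations for the unknowns x(0) and \(\nu \): the solution of (3.9) starting at x(0) must return to x(0) after one period, and have the average value \(\xi \).

import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import fsolve
p = 2 * np.pi
def residual(unknowns, xi):
    x0, nu = unknowns
    # augmented state y = (x, running integral of x) for the equation (3.9)
    rhs = lambda t, y: [-np.sin(t) * y[0] - (2 / np.pi) * np.arctan(y[0]) + nu + np.sin(t), y[0]]
    sol = solve_ivp(rhs, (0.0, p), [x0, 0.0], rtol=1e-10, atol=1e-12)
    xp, mean = sol.y[0, -1], sol.y[1, -1] / p
    return [xp - x0, mean - xi]            # periodicity and the prescribed average
for xi in (-4.0, -1.0, 0.0, 1.0, 4.0):
    x0, nu = fsolve(residual, x0=[xi, 0.0], args=(xi,))
    print("xi =", xi, " nu =", round(nu, 6))
# As xi -> +-infinity the computed nu approaches +-1, consistent with Fig. 1.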

Based on Theorem 5.3 we have the following instability result. Its proof is omitted, since it is similar to that of Theorem 5.2.

Theorem 5.4

In addition to the conditions of Theorem 5.3, assume that the limits at infinity \(g(\pm \infty )\) exist, and the condition (5.4) holds. Then the equation (5.8) has a p-periodic solution if and only if the condition (5.5) holds. If the condition (5.5) fails, then all of the solutions of (5.8) are unbounded as \(t \rightarrow \infty \).