In [1], Yoshizawa used the Lyapunov function method to develop the boundedness theory of solutions of systems of differential equations. The theory of the boundedness of solutions of systems with respect to part of the variables, or, as it is also called, of the partial boundedness of solutions, was developed on the basis of the Lyapunov function method by Rumyantsev and Oziraner in the monograph [2]. Based on the Lyapunov function method, Lapin (see, e.g., [3]–[5]) developed the theory of the partial boundedness of solutions with partially controlled initial conditions, which is conceptually parallel to the theory of stability of a “partial” equilibrium created by Vorotnikov (see, e.g., [6]–[8]). In Krasnosel’skii’s monograph [9] (see also [10]), the canonical domain method and the method of guiding functions were developed; in [9], they were used to obtain sufficient conditions for the existence of at least one solution of an arbitrary nonlinear system that is bounded on the whole real line.

On the other hand, the development of a new direction in the boundedness theory of solutions of systems of differential equations, namely, the theory of the Poisson boundedness of the set of all solutions, was started in [11]–[15]. Poisson boundedness of the solution set means that the solutions are not necessarily contained entirely in the corresponding balls of the phase space but return to these balls countably many times. The conditions under which the solution set of a system is Poisson bounded were studied in [11]–[15], and the important and interesting problem of finding conditions for the existence of at least one Poisson bounded solution of an arbitrary nonlinear system arose naturally.

In the present paper, we introduce the notions of Poisson boundedness and Poisson partial boundedness of solutions of a system which, in contrast to those introduced in [11]–[15], do not require the Poisson boundedness and the Poisson partial boundedness of solutions close to a given solution. Further, applying the Lyapunov function method and Krasnosel’skii’s method of canonical domains, we obtain a sufficient condition for the existence of Poisson bounded (in the above sense) solutions of the system. Now we pass to precise definitions and statements.

Consider an arbitrary system of differential equations in \(n\) variables:

$$\frac{dx}{dt}=F(t,x),\qquad F(t,x)=(F_1(t,x),\dots,F_n(t,x)) $$
(1)

whose right-hand side is defined in \(\mathbb R^+\times \mathbb R^n\), where \(\mathbb R^+=\{t\in\mathbb R\mid t\ge 0\}\). Suppose also that \(F(t,x)\) is continuous in \((t,x)\) and satisfies the Lipschitz condition in \(x\).

In what follows, \(\|\cdot\|\) denotes the usual Euclidean norm of \(\mathbb R^n\), \(n\ge 1\). For a solution \(x=x(t)\) of system (1) starting from a point \((t_0,x_0)\in\mathbb R^+\times \mathbb R^n\) we use the notation \(x=x(t,t_0,x_0)\). For any \(t_0\in\mathbb R^+\), let \(\mathbb R^+(t_0)\) denote the set \(\{t\in\mathbb R\mid t\ge t_0\}\). Any nonnegative increasing number sequence

$$\tau=\{\tau_i\}_{i\ge 1},\qquad \lim_{i\to\infty}\tau_i=+\infty,$$

is further called a \(\mathscr P\)-sequence. For each \(\mathscr P\)-sequence \(\tau=\{\tau_i\}_{i\ge 1}\), we let \(M(\tau)\) denote the set \(\bigcup_{i=1}^\infty[\tau_{2i-1};\tau_{2i}]\). For an arbitrary function \(V(t,x)\), we let \(V_{F(t,x)}'^+(t,x)\) denote the limit

$$V_{F(t,x)}'^+(t,x)=\lim_{\alpha\to+0} \biggl(\sup_{h\in(0;\alpha]} \frac{V(t+h,x+F(t,x)h)-V(t,x)}{h}\biggr),$$

which is called [1] the upper Dini derivative of the function \(V(t,x)\) with respect to system (1). We note that if the function \(V(t,x)\) has continuous partial derivatives with respect to the variables \(t\) and \(x\), then \(V_{F(t,x)}'^+(t,x)\) coincides with the usual derivative \(\dot V(t,x)\) of the function \(V(t,x)\) with respect to system (1).
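In this case, the derivative \(\dot V(t,x)\) is given by the formula

$$\dot V(t,x)=\frac{\partial V}{\partial t}(t,x)+\sum_{i=1}^{n}\frac{\partial V}{\partial x_i}(t,x)F_i(t,x).$$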

Recall [1] that a solution \(x=x(t,t_0,x_0)\) of system (1) is said to be bounded if, for this solution, there exists a number \(\beta>0\) such that the condition \(\|x(t,t_0,x_0)\|\le\beta\) is satisfied for all \(t\in\mathbb R^+(t_0)\).

Definition 1.

A solution \(x=x(t,t_0,x_0)\) of system (1) is said to be Poisson bounded if, for this solution, there exist a \(\mathscr P\)-sequence \(\tau=\{\tau_i\}_{i\ge 1}\) with \(t_0\in M(\tau)\) and a number \(\beta>0\) such that the condition \(\|x(t,t_0,x_0)\|\le\beta\) is satisfied for all \(t\in\mathbb R^+(t_0)\cap M(\tau)\).


In geometric language, Definition 1 means that the solution, starting at some time from a ball of radius \(\beta>0\) centered at the origin of the coordinate system, returns to this ball countably many times. It is clear that if a solution of system (1) is bounded, then this solution is also Poisson bounded.
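For example, the scalar equation

$$\frac{dx}{dt}=\sin t+t\cos t$$

has the unbounded solution \(x(t)=t\sin t\) with \(x(0)=0\). This solution is Poisson bounded: for the \(\mathscr P\)-sequence \(\tau\) whose intervals are \([\tau_1;\tau_2]=[0;1/2]\) and \([\tau_{2k+1};\tau_{2k+2}]=[k\pi-\delta_k;k\pi+\delta_k]\) with \(\delta_k=(k\pi+1)^{-1}\), \(k\ge 1\), we have \(|x(t)|\le t^2\le 1\) on the first interval and \(|x(t)|=|t|\,|\sin(t-k\pi)|\le(k\pi+\delta_k)\delta_k\le 1\) on the remaining ones, so that Definition 1 is satisfied with \(t_0=0\) and \(\beta=1\).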

Further, for each \(x=(x_1,\dots,x_n)\in\mathbb R^n\), \(n\ge 2\), and any fixed \(1\le k<n\), we shall use the notation \(x=(y,z)\), where \(y=(x_1,\dots,x_k)\in\mathbb R^k\) and \(z= (x_{k+1},\dots,x_n)\in\mathbb R^{n-k}\).

Now, following [2], we recall that a solution \(x(t,t_0,x_0)\) of system (1) is said to be \(y\)-bounded if, for this solution, there exists a number \(\beta>0\) such that the condition \(\|y(t,t_0,x_0)\|\le\beta\) is satisfied for all \(t\in \mathbb R^+(t_0)\). We also recall [2] that a solution \(x(t)\) of system (1) is said to be \(y\)-extendable to the whole half-line \(\mathbb R^+\) if the vector function \(y(t)\) is defined for all \(t\ge 0\). The notions of a \(z\)-bounded solution of system (1) and of a solution \(z\)-extendable to the whole half-line \(\mathbb R^+\) are defined similarly.

Definition 2.

A solution \(x=x(t,t_0,x_0)\) of system (1) is said to be Poisson \(y\)-bounded if, for this solution, there exist a \(\mathscr P\)-sequence \(\tau=\{\tau_i\}_{i\ge 1}\) with \(t_0\in M(\tau)\) and a number \(\beta>0\) such that the condition \(\|y(t,t_0,x_0)\|\le\beta\) is satisfied for all \(t\in\mathbb R^+(t_0)\cap M(\tau)\). The notion of a Poisson \(z\)-bounded solution of system (1) is defined similarly.
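For example, the solution \(x_1(t)=t\sin t\), \(x_2(t)=\sin t+t\cos t\) of the planar system \(dx_1/dt=x_2\), \(dx_2/dt=-x_1+2\cos t\), considered with \(k=1\), \(y=x_1\), and \(z=x_2\), is Poisson \(y\)-bounded (with the same \(\mathscr P\)-sequence and the same \(\beta=1\) as in the example following Definition 1), although it is not \(y\)-bounded.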

Further, following [9], we say that a compact subset \(\Omega\subset\mathbb R^k\) with nonempty interior is a canonical domain in \(\mathbb R^k\) if the following conditions are satisfied:

  (1) \(\Omega\) is defined by finitely many inequalities

    $$G_i(y)\le 0,\quad y\in\mathbb R^k,\qquad 1\le i\le r, $$
    (2)

    where the functions \(G_i(y)\) are continuously differentiable;

  (2) if \(G_{i_0}(y_0)=0\) at a point \(y_0\) of the boundary \(\partial\Omega\) of \(\Omega\), then \(\operatorname{grad}G_{i_0}(y_0)\ne 0\).

It should be noted that, in contrast to [9], a canonical domain \(\Omega\) is not required to be convex, because here we do not consider questions of the existence of periodic solutions.
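For example, the closed ball \(\{y\in\mathbb R^k\mid\|y\|\le R\}\), \(R>0\), is a canonical domain defined by the single inequality \(G_1(y)=\|y\|^2-R^2\le 0\): on its boundary \(\|y\|=R\), we have \(\operatorname{grad}G_1(y)=2y\ne 0\). The annulus \(\{y\in\mathbb R^k\mid R_1\le\|y\|\le R_2\}\), \(0<R_1<R_2\), \(k\ge 2\), defined by the inequalities \(G_1(y)=R_1^2-\|y\|^2\le 0\) and \(G_2(y)=\|y\|^2-R_2^2\le 0\), gives a simple example of a nonconvex canonical domain.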

For any canonical domain \(\Omega\) in \(\mathbb R^k\) and each point \(y\in\partial\Omega\), we let \(\alpha(y)\) denote the set of indices \(i\) such that the condition \(G_i(y)=0\) is satisfied. Moreover, for the right-hand side \(F(t,x)\) of system (1) and a fixed positive integer \(k<n\), we let \(M(t,x)\) denote the mapping \(M(t,x)=(F_1(t,x),\dots,F_k(t,x))^T\).

Now, we formulate and prove the following sufficient condition for the existence of Poisson bounded solutions of system (1) in terms of canonical domains and Lyapunov functions.

Theorem.

Let \(\Omega\) be a canonical domain in \(\mathbb R^k\) defined by inequalities (2), and let solutions of system (1) be \(z\)-extendable to the whole half-line \(\mathbb R^+\). Suppose also that the following conditions are satisfied for system (1):

  (1) the mapping \(M(t,x)\) defined by the right-hand side \(F(t,x)\) of system (1) satisfies the inequality

    $$(\operatorname{grad}G_i(y),M(t,x))\le 0 $$
    (3)

    for any \(t\in\mathbb R^+\), \(x=(y,z)\in\mathbb R^n\), \(y\in\partial\Omega\), and \(i\in\alpha(y)\);

  (2) there exist a \(\mathscr P\)-sequence \(\tau=\{\tau_i\}_{i\ge 1}\), an increasing function \(b(r)\ge 0\), \(r\in\mathbb R^+\), such that \(b(r)\to+\infty\) as \(r\to+\infty\), and a function \(V(t,x)\ge 0\), defined on \(\mathbb R^+(\tau_1)\times(\Omega\times \mathbb R^{n-k})\), which satisfy the following conditions:

    $$b(\|z\|) \le V(t,x) \qquad \textit{for all}\quad (t,x)\in M(\tau)\times(\Omega\times \mathbb R^{n-k}),$$
    (4)
    $$V_{F(t,x)}'^+(t,x) \le 0 \qquad \textit{for all}\quad (t,x)\in\mathbb R^+(\tau_1)\times(\Omega\times \mathbb R^{n-k}).$$
    (5)

Then each solution \(x(t,t_0,x_0)\) of system (1), where \((t_0,x_0)\in M(\tau)\times(\Omega\times \mathbb R^{n-k})\), is Poisson bounded.

Proof.

First, we show that any solution \(x(t,t_0,x_0)\) of system (1), where \((t_0,x_0)\in\mathbb R^+\times(\Omega\times \mathbb R^{n-k})\), is \(y\)-bounded. Along with system (1), we consider the system

$$\frac{dx}{dt}=F(t,x)+\gamma\cdot(s_0-x) $$
(6)

with parameter \(\gamma>0\), where \(s_0=(p_0,q_0)\in\mathbb R^k\times \mathbb R^{n-k}\) is a fixed point for which \(p_0\) is an interior point of \(\Omega\). The geometrically obvious inequality \((\operatorname{grad}G_i(y),p_0-y)<0\), \(y\in\partial\Omega\), \(i\in\alpha(y)\), and condition (3) imply that the right-hand side of system (6) satisfies the condition

$$(\operatorname{grad}G_i(y),M(t,x)+\gamma\cdot(p_0-y))<0 $$
(7)

for all \(t\in\mathbb R^+\), \(x\in\mathbb R^n\), \(y\in\partial\Omega\), and \(i\in\alpha(y)\). Now we choose an arbitrary point \(x_0\in\Omega\times \mathbb R^{n-k}\). For each fixed \(\gamma>0\), we consider the solution \(x_\gamma(t,t_0,x_0)\) of system (6), where \(t_0\ge 0\), and show that \(y_\gamma(t,t_0,x_0)\in\Omega\) for all \(t\ge t_0\). Assume that, on the contrary, for a vector function \(y_\gamma(t, t_0,x_0)\), there exists a number \(t'_\gamma>t_0\) such that \(y_\gamma(t'_\gamma,t_0,x_0)\notin\Omega\). Since the vector function \(y_\gamma(t,t_0,x_0)\) is continuous in \(t\) and \(\Omega\) is a compact set, we see that there is a \(t_0\le\overline t_\gamma<t'_\gamma\) such that \(y_\gamma(\overline t_\gamma,t_0,x_0)\in\Omega\) and \(y_\gamma(t,t_0,x_0)\notin\Omega\) for \(t>\overline t_\gamma\) sufficiently close to \(\overline t_\gamma\). It is clear that \(y_\gamma(\overline t_\gamma,t_0,x_0)\in\partial\Omega\) and hence

$$\begin{alignedat}{2} G_i(y_\gamma(\overline t_\gamma,t_0,x_0))&=0&\qquad &\text{for}\quad i\in \alpha(y_\gamma(\overline t_\gamma,t_0,x_0)), \\ G_i(y_\gamma(\overline t_\gamma,t_0,x_0))&<0&\qquad &\text{for}\quad i\notin\alpha(y_\gamma(\overline t_\gamma,t_0,x_0)). \end{alignedat}$$

Using condition (7), we obtain the inequality

$$\frac{d}{dt}G_i(y_\gamma(t,t_0,x_0))\biggr|_{t=\overline t_\gamma}<0,\qquad i\in\alpha(y_\gamma(\overline t_\gamma,t_0,x_0)),$$

which implies \(G_i(y_\gamma(t,t_0,x_0))<0\), \(i\in\alpha(y_\gamma(\overline t_\gamma,t_0,x_0))\), for \(t>\overline t_\gamma\) sufficiently close to \(\overline t_\gamma\). For such \(t>\overline t_\gamma\), we also have \(G_i(y_\gamma(t,t_0,x_0))<0\) for \(i\notin\alpha(y_\gamma(\overline t_\gamma,t_0,x_0))\), because the functions \(G_i(y)\) are continuous and \(G_i(y_\gamma(\overline t_\gamma,t_0,x_0))<0\) for these indices. Thus, for \(t>\overline t_\gamma\) sufficiently close to \(\overline t_\gamma\), the inequalities \(G_i(y_\gamma(t,t_0,x_0))<0\) hold for all \(1\le i\le r\), i.e., \(y_\gamma(t,t_0,x_0)\in\Omega\). This contradicts the fact that \(y_\gamma(t,t_0,x_0)\notin\Omega\) for \(t>\overline t_\gamma\) sufficiently close to \(\overline t_\gamma\). Therefore, the above assumption is false, and hence the solution \(x_\gamma(t,t_0,x_0)\) of (6) under consideration satisfies the condition \(y_\gamma(t,t_0,x_0)\in\Omega\) for all \(t\ge t_0\).

Now we consider system (6) for \(\gamma\ge 0\) and a solution \(x_0(t,t_0,x_0)\) of this system for \(\gamma=0\), i.e., a solution of system (1), which we denote by \(x(t,t_0,x_0)\) in what follows. We shall show that the condition \(y(t,t_0,x_0)\in\Omega\) is satisfied for all \(t\ge t_0\). For this purpose, we choose an arbitrary number sequence \((\gamma_i>0)_{i\ge 1}\) converging to zero. Since system (6), whose right-hand side depends on the parameter \(\gamma\ge 0\), satisfies the conditions of the theorem on the continuous dependence of solutions on a parameter (see, e.g., [16]), it follows that the sequence \((x_{\gamma_i}(t,t_0,x_0))_{i\ge 1}\) of points in \(\mathbb R^n\) converges to the point \(x(t,t_0,x_0)\in\mathbb R^n\) for each fixed number \(t\ge t_0\). In particular, for each fixed number \(t\ge t_0\), the sequence \((y_{\gamma_i}(t,t_0,x_0))_{i\ge 1}\) of points in \(\mathbb R^k\) converges to the point \(y(t,t_0,x_0)\in\mathbb R^k\). This implies that the condition \(y(t,t_0,x_0)\in\Omega\) is satisfied for the solution \(x(t,t_0,x_0)\) of system (1) for all \(t\ge t_0\). Indeed, assume that, on the contrary, the condition \(y(t^*,t_0,x_0)\notin\Omega\) is satisfied for some \(t^*>t_0\). Since the set \(\Omega\) is closed and the sequence \((y_{\gamma_i}(t^*,t_0,x_0))_{i\ge 1}\) of points in \(\mathbb R^k\) converges to the point \(y(t^*,t_0,x_0)\notin\Omega\), we have \(y_{\gamma_i}(t^*,t_0,x_0)\notin\Omega\) for sufficiently large \(i\). This contradicts the fact that \(y_{\gamma_i}(t^*,t_0,x_0)\in\Omega\) for all \(i\ge 1\). Therefore, the above assumption is false, and hence the solution \(x(t,t_0,x_0)\) of system (1) satisfies the condition \(y(t,t_0,x_0)\in\Omega\) for all \(t\ge t_0\).

The fact that the solutions of system (1) are \(z\)-extendable implies that the solution \(x(t,t_0,x_0)\) is defined for all \(t\ge t_0\). Moreover, since \(y(t,t_0,x_0)\in\Omega\) for any \(t\ge t_0\), it follows that \(x(t,t_0,x_0)\) is a \(y\)-bounded solution. Indeed, since the set \(\Omega\) is compact in \(\mathbb R^k\), there exists a ball of radius \(\beta>0\) centered at the origin of \(\mathbb R^k\) that contains \(\Omega\), and hence the inequality \(\|y(t,t_0,x_0)\|\le\beta\) holds for all \(t\ge t_0\). Thus, we have shown that any solution \(x(t,t_0,x_0)\) of system (1), where \((t_0,x_0)\in\mathbb R^+\times(\Omega\times \mathbb R^{n-k})\), is \(y\)-bounded.

Now we show that any solution \(x(t,t_0,x_0)\) of system (1), where \((t_0,x_0)\in M(\tau)\times(\Omega\times \mathbb R^{n-k})\), is Poisson \(z\)-bounded. Indeed, for any such solution, since \(y(t,t_0,x_0)\in\Omega\) for all \(t\ge t_0\), requirement (4) in condition (2) of the theorem yields the inequality

$$b(\|z(t,t_0,x_0)\|)\le V(t,x(t,t_0,x_0))\qquad\text{for all}\quad t\in\mathbb R^+(t_0)\cap M(\tau).$$

Since, as shown above, \(y(t,t_0,x_0)\in\Omega\) for all \(t\ge t_0\) and \(t_0\ge\tau_1\) (because \(t_0\in M(\tau)\)), it follows from requirement (5) in condition (2) of the theorem that the function \(V(t,x(t,t_0,x_0))\) of the variable \(t\) is nonincreasing along this solution. This implies that the inequality \(V(t,x(t,t_0,x_0))\le V(t_0,x_0)\) holds for all \(t\in\mathbb R^+(t_0)\cap M(\tau)\). Now, using the condition that \(b(r)\to\infty\) as \(r\to\infty\), we choose a number \(\beta>0\) for which \(V(t_0,x_0)\le b(\beta)\). This implies the inequality

$$b(\|z(t,t_0,x_0)\|)\le b(\beta)\qquad\text{for}\quad t\in\mathbb R^+(t_0)\cap M(\tau).$$

Since \(b(r)\) is increasing, it follows from the last inequality that

$$\|z(t,t_0,x_0)\|\le\beta \qquad\text{for all}\quad t\in\mathbb R^+(t_0)\cap M(\tau).$$

Thus, we have shown that any solution \(x(t,t_0,x_0)\) of system (1), where \((t_0,x_0)\in M(\tau)\times(\Omega\times \mathbb R^{n-k})\), is Poisson \(z\)-bounded.

Now we note that the number \(\beta>0\) in the proof of the Poisson \(z\)-boundedness of the solution \(x(t,t_0,x_0)\) can be chosen arbitrarily large. Taking into account the Poisson \(z\)-boundedness and the \(y\)-boundedness of the solution \(x(t,t_0,x_0)\) and using the obvious inequality \(\|x\|\le\|y\|+\|z\|\), we see that any solution \(x(t,t_0,x_0)\) of system (1), where \((t_0,x_0)\in M(\tau)\times(\Omega\times \mathbb R^{n-k})\), is Poisson bounded. The proof is complete.
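As a simple illustration of the theorem, consider the system

$$\frac{dy}{dt}=-y,\qquad \frac{dz}{dt}=tz\cos t,\qquad (y,z)\in\mathbb R\times\mathbb R,$$

together with the canonical domain \(\Omega=[-1;1]\) defined by \(G_1(y)=y-1\le 0\) and \(G_2(y)=-y-1\le 0\). Condition (3) is satisfied, since \((\operatorname{grad}G_1(1))\cdot(-1)=-1\le 0\) and \((\operatorname{grad}G_2(-1))\cdot 1=-1\le 0\), and all solutions are \(z\)-extendable to \(\mathbb R^+\). Put \(H(t)=t\sin t+\cos t-1\), so that \(H'(t)=t\cos t\), and take

$$V(t,x)=z^2e^{-2H(t)},\qquad b(r)=r^2,\qquad [\tau_{2m-1};\tau_{2m}]=\biggl[(2m-1)\pi;(2m-1)\pi+\frac\pi2\biggr],\quad m\ge 1.$$

Then \(V_{F(t,x)}'^+(t,x)=\dot V(t,x)=-2H'(t)z^2e^{-2H(t)}+2ze^{-2H(t)}\,tz\cos t=0\), so condition (5) holds; moreover, \(H(t)\le 0\) on \(M(\tau)\) (there \(\sin t\le 0\) and \(\cos t\le 1\)), whence \(V(t,x)\ge z^2=b(\|z\|)\) on \(M(\tau)\times(\Omega\times\mathbb R)\), i.e., condition (4) holds. By the theorem, every solution with \(t_0\in M(\tau)\) and \(y_0\in[-1;1]\) is Poisson bounded, although the components \(z(t)=z_0e^{H(t)-H(t_0)}\) with \(z_0\ne 0\) are unbounded.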

In conclusion, we note that one could also introduce the notion of a solution Poisson bounded on the whole real line \(\mathbb R\). However, the method for constructing Poisson bounded solutions proposed above cannot, in general, be used to construct solutions Poisson bounded on the whole real line, because such constructions give rise to sequences of points in \(\Omega\times \mathbb R^{n-k}\) that may be unbounded with respect to the variables \(z\).