1 Introduction

A second order differential equation of this type is generally referred to as a Liénard equation (named after the French physicist Alfred-Marie Liénard) in the theory of dynamical systems and differential equations (see [13, 14, 18, 20]). The analysis of qualitative properties of solutions of ordinary and delay differential equations has received considerable attention from many notable researchers over the last few decades (see [1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31,32,33,34]). In this analysis, the direct method of Lyapunov, or the Lyapunov–Krasovskii method, has proved very useful. The method requires the construction of a suitable scalar function, known as a Lyapunov function or Lyapunov–Krasovskii functional, which together with its time derivative satisfies certain conditions. However, constructing such a functional is tedious, especially for non-linear differential equations.

In 2013, Tunc [21] employed the Lyapunov–Krasovskii method to establish sufficient conditions for the stability of the trivial solution (when \(P(t) \equiv 0\)) and the boundedness of solutions (when \(P(t) \ne 0\)) of the equation:

$$\begin{aligned} X^{\prime \prime }(t) + F(X(t), X^{\prime }(t))X^{\prime }(t) + H(X(t - \tau )) = P(t), \end{aligned}$$

where \(\tau > 0\) is a constant delay. Later, Omeike et al. [16] studied the asymptotic stability and uniform ultimate boundedness of solutions of the differential equation:

$$\begin{aligned} X^{\prime \prime } + AX^{\prime } + H(X(t - r(t))) = P(t, X, X^{\prime } ), \end{aligned}$$

where A is a real \(n \times n\) constant, symmetric, positive definite matrix.

In a recent paper, Tunc and Tunc [22] established some interesting results on the stability, boundedness and square integrability of solutions of the equation:

$$\begin{aligned} X^{\prime \prime } + F(X, X^{\prime })X^{\prime } + H(X(t - r(t))) = P(t, X, X^{\prime }), \end{aligned}$$
(1.1)

where \( X, Y : {\mathbb {R}}^+ \rightarrow {\mathbb {R}}^n\), \( {\mathbb {R}} = ( -\infty , \infty )\), \({\mathbb {R}}^+ = [ 0, \infty )\); \( H: {\mathbb {R}}^n \rightarrow {\mathbb {R}}^n \) is a continuously differentiable function with \(H(0) = 0\); \( P: {\mathbb {R}}^+ \times {\mathbb {R}}^n \times {\mathbb {R}}^n \rightarrow {\mathbb {R}}^n \) is a continuous function; F is an \( n \times n \) continuous, symmetric, positive definite matrix function depending on the arguments displayed explicitly; and the prime (\(^{\prime }\)) indicates differentiation with respect to the variable t. For any two vectors X, Y in \({\mathbb {R}}^n\), the symbol \(\langle X,Y \rangle \) denotes the usual scalar product in \({\mathbb {R}}^n\), i.e. \( \langle X, Y \rangle = \sum _{i=1}^{n}x_iy_i\), where \(x_1, x_2,\ldots ,x_n\) and \(y_1, y_2,\ldots ,y_n\) are the components of the vectors X and Y, respectively; thus \(\parallel X \parallel ^2 = \langle X,X \rangle \).

Motivated by the works of Tunc [21], Tunc and Tunc [22], Omeike et al. [16] and other works in the references, we examine conditions that guarantee the uniform ultimate boundedness of solutions of Eq. (1.1). To the best of our knowledge, the uniform ultimate boundedness of solutions of Eq. (1.1) has not been discussed by any author.

Setting \(X^{\prime } = Y\), Eq. (1.1) can be written as the following system of first order differential equations:

$$\begin{aligned} \begin{aligned} X^{\prime }=~&Y,\\ Y^{\prime }=~&- F(X,Y )Y - H(X) + \int _{t - r(t)}^{t}J_H(X(s))Y(s)ds + P(t,X,Y), \end{aligned} \end{aligned}$$
(1.2)

where \(J_H(X)\) in system (1.2) denotes the Jacobian matrix of the vector function H(X), defined by

$$\begin{aligned} J_H(X) = \Bigg ( \frac{\partial h_i}{\partial x_j }\Bigg ), \quad (i,j = 1, 2, \ldots , n), \end{aligned}$$

where \((x_1, x_2,\ldots , x_n)\) and \((h_1, h_2, \ldots ,h_n)\) are the components of the vectors X and H, respectively.
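Although the analysis in this paper is purely analytic, the Jacobian \(J_H\) is easy to sanity-check numerically. The sketch below (Python; it assumes the componentwise \(H(x) = 2x + \sin x\) taken from the worked example in Section 4, and all function names are ours) compares the analytic Jacobian with a central finite-difference approximation:

```python
import math

def H(x):
    # Componentwise nonlinearity borrowed from the Section 4 example
    # (an assumption here; any smooth H with H(0) = 0 would do).
    return [2.0 * xi + math.sin(xi) for xi in x]

def J_H_diag(x):
    # Analytic Jacobian of H; it is diagonal because H acts componentwise.
    return [2.0 + math.cos(xi) for xi in x]

def J_H_fd_diag(x, h=1e-6):
    # Central finite-difference approximation of the diagonal entries.
    out = []
    for j, xj in enumerate(x):
        xp, xm = list(x), list(x)
        xp[j], xm[j] = xj + h, xj - h
        out.append((H(xp)[j] - H(xm)[j]) / (2.0 * h))
    return out

x = [0.7, -1.3]
exact = J_H_diag(x)
approx = J_H_fd_diag(x)
```

The agreement of `exact` and `approx` simply confirms that the diagonal Jacobian formula matches the definition above.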

2 Preliminary results

The following algebraic results and definitions are needed to prove our main result. Their proofs can be found in ([7, 8, 11, 12, 16, 22]).

Lemma 2.1

[7, 8, 16, 22] Let A be a real symmetric positive definite \(n \times n\) matrix. Then for any X in \({\mathbb {R}}^n\), we have

$$\begin{aligned} \delta _a \parallel X \parallel ^2 \le \langle AX, X \rangle \le \Delta _a \parallel X \parallel ^2, \end{aligned}$$

where \(\delta _a\) and \(\Delta _a\) are respectively the least and greatest eigenvalues of A.
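Lemma 2.1 is the familiar Rayleigh-quotient bound. A minimal numerical illustration (a sketch only; the \(2 \times 2\) matrix A is hypothetical, and its eigenvalues are computed in closed form from the trace and determinant) is:

```python
import math
import random

# Hypothetical symmetric positive definite matrix A = [[a, b], [b, d]].
a, b, d = 5.0, 1.0, 3.0
tr, det = a + d, a * d - b * b
delta_a = (tr - math.sqrt(tr * tr - 4 * det)) / 2   # least eigenvalue
Delta_a = (tr + math.sqrt(tr * tr - 4 * det)) / 2   # greatest eigenvalue

def quad(x):
    # <AX, X> for the matrix A above.
    return a * x[0] ** 2 + 2 * b * x[0] * x[1] + d * x[1] ** 2

# Check delta_a ||X||^2 <= <AX, X> <= Delta_a ||X||^2 on random vectors.
random.seed(0)
ok = True
for _ in range(1000):
    x = [random.uniform(-1, 1), random.uniform(-1, 1)]
    n2 = x[0] ** 2 + x[1] ** 2
    q = quad(x)
    ok = ok and (delta_a * n2 - 1e-9 <= q <= Delta_a * n2 + 1e-9)
```
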

Lemma 2.2

[7, 8, 16, 22] Let H(X) be a continuous vector function with \(H(0) = 0\). Then

$$\begin{aligned} \frac{d}{dt}\int _0^1 \langle H(\sigma X), Y \rangle d \sigma = \langle H(X), Y \rangle . \end{aligned}$$

Lemma 2.3

[7, 8, 16, 22] Let H(X) be a continuous vector function with \(H(0) = 0\). Then

$$\begin{aligned} \delta _h \parallel X \parallel ^2 \le 2 \int _0^1 \langle H(\sigma X), X \rangle d \sigma \le \Delta _h \parallel X \parallel ^2, \end{aligned}$$

where \(\delta _h\) and \(\Delta _h\) are respectively the least and greatest eigenvalues of \(J_H(\sigma X)\).
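Lemma 2.3 can likewise be checked numerically. The sketch below approximates \(2\int_0^1 \langle H(\sigma X), X\rangle \, d\sigma\) by the midpoint rule for the componentwise \(H(x) = 2x + \sin x\) of Section 4 (an assumption for illustration), whose Jacobian eigenvalues lie in \([1, 3]\), so the lemma predicts \(\Vert X\Vert^2 \le 2\int_0^1 \langle H(\sigma X), X\rangle\, d\sigma \le 3\Vert X\Vert^2\):

```python
import math

def H(x):
    # Componentwise H from the Section 4 example (illustrative assumption).
    return [2.0 * xi + math.sin(xi) for xi in x]

def two_int(x, m=2000):
    # Midpoint-rule approximation of 2 * integral_0^1 <H(sigma x), x> d sigma.
    s = 0.0
    for k in range(m):
        sig = (k + 0.5) / m
        hx = H([sig * xi for xi in x])
        s += sum(hi * xi for hi, xi in zip(hx, x))
    return 2.0 * s / m

x = [1.2, -0.4]
n2 = sum(xi * xi for xi in x)   # ||x||^2
val = two_int(x)
```
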

Consider the following non-autonomous delay differential equation

$$\begin{aligned} x^{ \prime } = F(t, x_t), ~ x_t = x(t + \theta ), ~ -r \le \theta \le 0, \end{aligned}$$
(2.1)

where \(F : {\mathbb {R}} \times C \rightarrow {\mathbb {R}}^n\) is a continuous mapping with \(F(t,0) = 0\) that takes closed bounded sets of \({\mathbb {R}} \times C\) into bounded sets of \({\mathbb {R}}^n\), \(C=C([-r,0], {\mathbb {R}}^{n})\), and \(\phi \in C\). We assume that \({{a}_{0}}\ge 0\), \(t\ge {{t}_{0}}\ge 0\), and \(x\in C([{{t}_{0}}-r,{{t}_{0}}+{{a}_{0}}], {\mathbb {R}}^{n})\), with \({{x}_{t}}=x(t+\theta )\) for \(-r\le \theta \le 0\) and \(x(t)=\phi (t)\) for \(t\in \left[ -r,0 \right] \), \(r >0.\)

Definition 2.1

[16] A matrix A is said to be positive definite if \(\langle AX, X \rangle > 0\) for all non-zero X in \( {\mathbb {R}}^n \).

Definition 2.2

[9, 22] A continuous function \(W: {\mathbb {R}}^n \rightarrow {\mathbb {R}}^+\) with \(W(0) = 0, W(s) > 0,\) and W strictly increasing is a wedge. (It is denoted by W or \(W_j\), where j is an integer.)

Definition 2.3

[9, 21] Let D be an open set in \({\mathbb {R}}^n\) with \(0 \in D\). A function \(V:[0, \infty ) \times D \rightarrow [0, \infty )\) is called positive definite if \(V(t, 0) = 0\) and if there is a wedge \(W_1\) with \(V(t, x) \ge W_1(|x|),\) and is called a decrescent function if there is a wedge \(W_2\) with \(V(t,x) \le W_2(|x|).\)

Definition 2.4

[28] The solutions of equation (2.1) are uniformly ultimately bounded for bound M if there exists an \( M > 0 \) such that for any \( \alpha > 0\) and \( t_0 \in I\) there exists a \(T( \alpha ) > 0 \) such that \( X_0 \in S_{ \alpha }, \) where \( S_{\alpha } = \{ x \in {\mathbb {R}}^n : \parallel x \parallel < \alpha \},\) implies that

$$\begin{aligned} \Vert X(t; t_0, X_0) \Vert < M \end{aligned}$$

for all \( t \ge t_0 + T( \alpha ).\)

Lemma 2.4

[9, 16, 21, 26] Let \(V(t, \phi ) : {\mathbb {R}} \times C \rightarrow {\mathbb {R}}\) be continuous and locally Lipschitz in \(\phi \). Assume that the following conditions hold: \( (i)~ W(|x(t)|) \le V(t, x_t) \le W_1(|x(t)|) + W_2 \big ( \int _{t -r(t)}^{t} W_3( |x(s)|)ds \big )\) and \((ii)~ {\dot{V}}_{(2.1) } \le -W_3(|x(t)|) + M\) for some \( M > 0,\) where \( W, W_i ~ (i =1,2,3) \) are wedges and \({\dot{V}}_{(2.1) }\) denotes the derivative of the functional \(V(t, \phi )\) with respect to the independent variable t along the solution paths of (2.1). Then the solutions of (2.1) are uniformly bounded and uniformly ultimately bounded.

Remark 2.1

The qualitative properties of the solutions of (2.1) can be studied by means of a scalar functional \(V(t, \phi )\), called a Lyapunov–Krasovskii functional, as in Lemma 2.4.

3 Main result

Theorem 3.1

Further to the basic assumptions placed on the functions F, H and P that appear in Eq. (1.1) or system (1.2), we assume that there exist positive constants \(D_0, D_1, \delta _f, \delta _h, \Delta _f, \Delta _h, \epsilon , \alpha \) and \(\xi \) such that the following conditions hold:

  1. (i)

    \(H(0) = 0\), \(H(X) \ne 0\) for \(X \ne 0\); the matrix \(J_H(X)\) exists, is symmetric and positive definite, and for all \(X \in {\mathbb {R}}^n\), \(\delta _h \le \lambda _i(J_H(X)) \le \Delta _h\), where \(\lambda _i(J_H(X))\) are the eigenvalues of \(J_H(X).\)

  2. (ii)

    The eigenvalues \(\lambda _i(F(X, Y))\) of F(X, Y) satisfy \(\delta _f = \alpha - \epsilon \le \lambda _i(F(X, Y)) \le \alpha .\)

  3. (iii)

    \(0 \le r(t) \le \gamma \), where \(\gamma \) is a positive constant, and \(r^{\prime }(t) \le \xi \) with \(0< \xi < 1.\)

  4. (iv)

    \(\parallel P(t, X, Y) \parallel \le D_0 + D_1\{ \parallel X \parallel + \parallel Y \parallel \}.\) Then the solutions of system (1.2) are uniformly ultimately bounded whenever

    $$\begin{aligned} 0< \gamma < \min \Big ( \frac{2 \delta _h - \epsilon }{\Delta _h}, \frac{(1 - \xi )( 2 \alpha - \epsilon (\alpha + 4) )}{\Delta _h\big (2(2 - \xi ) + \alpha \big )}\Big ). \end{aligned}$$
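The delay restriction above is straightforward to evaluate once the constants are known. A small helper (Python; the function name is ours, and the constants passed in the call are the ones verified in the example of Section 4) might read:

```python
def gamma_bound(delta_h, Delta_h, alpha, epsilon, xi):
    # Right-hand side of the delay restriction in Theorem 3.1:
    # min( (2*delta_h - eps)/Delta_h,
    #      (1 - xi)(2*alpha - eps*(alpha + 4)) / (Delta_h*(2*(2 - xi) + alpha)) )
    first = (2.0 * delta_h - epsilon) / Delta_h
    second = (1.0 - xi) * (2.0 * alpha - epsilon * (alpha + 4.0)) \
             / (Delta_h * (2.0 * (2.0 - xi) + alpha))
    return min(first, second)

# Constants of the worked example in Section 4; the bound evaluates to 1/7,
# so any delay with gamma < 1/7 (e.g. gamma = 1/8) is admissible.
b = gamma_bound(delta_h=1.0, Delta_h=3.0, alpha=14.0, epsilon=1.0, xi=0.25)
```
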

Proof

Let a continuously differentiable Lyapunov–Krasovskii functional \(V(t) = V(X(t), Y(t))\) be defined by

$$\begin{aligned} 2V(t){} & {} = \parallel \alpha X + Y \parallel ^2 + 4 \int _{0}^{1}\langle H(\sigma _1 X), X\rangle d \sigma _1 + \parallel Y \parallel ^2 \nonumber \\ {}{} & {} \quad + 2 \lambda \int _{-r(t)}^{0}\int _{t + s}^{t}\langle Y(\theta ), Y(\theta )\rangle d \theta d s, \end{aligned}$$
(3.1)

where \(\lambda > 0\) is a constant whose value will be specified later.

Our first concern is to establish that the functional V(t) defined by (3.1) is nonnegative. Obviously, \(V(0, 0) = 0.\) By Lemma 2.3 and assumption (i) of the theorem, we have

$$\begin{aligned} 2\delta _h\parallel X \parallel ^2 \le 4 \int _{0}^{1}\langle H(\sigma _1 X), X\rangle d \sigma _1 \le 2\Delta _h \parallel X \parallel ^2. \end{aligned}$$
(3.2)

Also, by using the inequality \( 2 | \langle X, Y \rangle | \le \parallel X \parallel ^2 + \parallel Y \parallel ^2,\) we have

$$\begin{aligned} 0 \le \parallel \alpha X + Y \parallel ^2 \le 2\{ \alpha ^2\parallel X \parallel ^2 + \parallel Y \parallel ^2 \}. \end{aligned}$$
(3.3)

Lastly,

$$\begin{aligned} 0 \le \lambda \int _{-r(t)}^{0}\int _{t + s}^{t}\langle Y(\theta ), Y(\theta )\rangle d \theta d s. \end{aligned}$$
(3.4)

Thus, using the estimates (3.2)–(3.4) in (3.1) we have

$$\begin{aligned}\begin{aligned} 2V(t)&\ge 2\delta _h\parallel X \parallel ^2 + \parallel Y \parallel ^2\\ {}&\ge D_2\{\parallel X \parallel ^2 + \parallel Y \parallel ^2\}, \end{aligned} \end{aligned}$$

where \(D_2 = \min \{ 2\delta _h, 1\}\).

Similarly, we have

$$\begin{aligned}\begin{aligned} 2V(t) \le 2\big (\Delta _h + \alpha ^2\big )\parallel X \parallel ^2 + 3 \parallel Y \parallel ^2 + 2 \lambda \int _{-r(t)}^{0}\int _{t + s}^{t}\langle Y(\theta ), Y(\theta )\rangle d \theta d s\\ \le D_3 \{\parallel X \parallel ^2 + \parallel Y \parallel ^2 \} + 2\lambda r(t) \int _{t - r(t)}^{t}\langle Y(\theta ), Y(\theta )\rangle d \theta , \end{aligned} \end{aligned}$$

where \(D_3 = \max \{2(\Delta _h + \alpha ^2), 3\}\). Hence, there exists a continuous function v(s) such that

$$\begin{aligned} v(\parallel \psi (0)\parallel ) \le V(\psi ), v(\parallel \psi (0)\parallel ) \ge 0. \end{aligned}$$

Next, we obtain the derivative \({\dot{V}}(t)\) of V(t) with respect to the independent variable t along the system (1.2) as follows:

$$\begin{aligned} \frac{d}{dt}V(t) = {\dot{V}}(t)= & {} \langle \alpha X + Y, \alpha Y - F(X,Y)Y - H(X) \\{} & {} + \int _{t-r(t)}^{t} J_H(X(s))Y(s) d s + P(t, X, Y) \rangle \\{} & {} + 2 \frac{d}{dt}\int _{0}^{1}\langle H(\sigma _1 X), X\rangle d \sigma _1 + \langle Y, - F(X,Y)Y - H(X) \\{} & {} + \int _{t-r(t)}^{t} J_H(X(s))Y(s) d s \\{} & {} + P(t, X, Y) \rangle + \lambda \frac{d}{dt} \int _{-r(t)}^{0}\int _{t+s}^{t}\langle Y(\theta ),Y(\theta )\rangle d \theta d s. \end{aligned}$$

By Lemma 2.2, we have

$$\begin{aligned} \frac{d}{dt}\int _{0}^{1}\langle H(\sigma _1 X), X\rangle d \sigma _1 = \langle H(X), Y\rangle . \end{aligned}$$

Also,

$$\begin{aligned}\begin{aligned}&\lambda \frac{d}{dt} \int _{-r(t)}^{0}\int _{t+s}^{t}\langle Y(\theta ),Y(\theta )\rangle d \theta d s \\ {}&\quad = \lambda r(t)\parallel Y(t) \parallel ^2 - \lambda (1 -r^{\prime }(t))\int _{t-r(t)}^{t}\parallel Y(\theta ) \parallel ^2d \theta \\ {}&\quad \le \lambda \gamma \parallel Y(t) \parallel ^2 - \lambda (1 - \xi )\int _{t-r(t)}^{t}\parallel Y(\theta ) \parallel ^2d \theta , \end{aligned} \end{aligned}$$

where we have applied assumption (iii) of the theorem.

Therefore, after simplifying and rearranging terms, we obtain

$$\begin{aligned}\begin{aligned} {\dot{V}}(t) =&- \alpha \langle X, H(X) \rangle - 2 \langle Y, F(X,Y)Y \rangle + \alpha \langle Y, Y \rangle + \alpha \langle X, \big (\alpha I - F(X,Y)\big )Y\rangle \\ {}&+\alpha \int _{t-r(t)}^{t}\langle X, J_H(X(s))Y(s)\rangle d s + 2 \int _{t-r(t)}^{t}\langle Y, J_H(X(s))Y(s)\rangle d s\\ {}&+ \lambda r(t)\parallel Y(t) \parallel ^2 - \lambda (1 -r^{\prime }(t))\int _{t-r(t)}^{t}\parallel Y(\theta ) \parallel ^2d \theta + \langle \alpha X + 2Y, P(t, X, Y)\rangle \\ \le&- \alpha \langle X, H(X) \rangle - 2 \langle Y, F(X,Y)Y \rangle + \alpha \langle Y, Y \rangle + \alpha \langle X,\big (\alpha I - F(X,Y)\big ) Y\rangle \\ {}&+\alpha \int _{t-r(t)}^{t}\langle X, J_H(X(s))Y(s)\rangle d s + 2 \int _{t-r(t)}^{t}\langle Y, J_H(X(s))Y(s)\rangle d s\\ {}&+\lambda \gamma \parallel Y(t) \parallel ^2 - \lambda (1 - \xi )\int _{t-r(t)}^{t}\parallel Y(\theta ) \parallel ^2d \theta + \langle \alpha X + 2 Y, P(t, X, Y)\rangle , \end{aligned} \end{aligned}$$

where I is an \(n \times n\) identity matrix.

If we apply Lemma 2.1, assumptions (i), (ii) of the theorem and the fact that \(2 \parallel X \parallel \parallel Y \parallel \le \parallel X \parallel ^2 + \parallel Y \parallel ^2\) in the above, we obtain

$$\begin{aligned} \begin{aligned} {\dot{V}}(t) \le&-\frac{\alpha }{2}\big ( 2\delta _h - \epsilon -\Delta _h \gamma \big )\parallel X(t) \parallel ^2 \\ {}&- \frac{1}{2}\big ( 2 \alpha - \epsilon ( \alpha + 4) - 2 \gamma (\Delta _h + \lambda )\big ) \parallel Y(t) \parallel ^2\\ {}&+ \frac{1}{2}\big ( (2+ \alpha )\Delta _h - 2\lambda (1 - \xi )\big )\int _{t-r(t)}^{t}\parallel Y(\theta )\parallel ^2 d \theta \\ {}&+ \parallel P(t,X,Y)\parallel \big ( \alpha \parallel X \parallel + 2\parallel Y \parallel \big ). \end{aligned} \end{aligned}$$
(3.5)

On setting \( \lambda = \frac{\Delta _h(\alpha + 2)}{2(1- \xi )}\), \(\gamma < \min \Big ( \frac{2 \delta _h - \epsilon }{\Delta _h}, \frac{(1 - \xi )( 2 \alpha - \epsilon (\alpha + 4) )}{\Delta _h\big (2(2 - \xi ) + \alpha \big )}\Big )\), and using assumption (iv) of the theorem in (3.5), we obtain the following inequality for some positive constant \(K_1\):

$$\begin{aligned}\begin{aligned} {\dot{V}}(t)&\le - K_1\{ \parallel X \parallel ^2 + \parallel Y \parallel ^2\} + \big ( D_0 + D_1(\parallel X \parallel + \parallel Y \parallel )\big )\big ( \alpha \parallel X \parallel + 2 \parallel Y \parallel \big )\\ {}&\le - K_1\{ \parallel X \parallel ^2 + \parallel Y \parallel ^2\} + D_0\big ( \alpha \parallel X \parallel + 2 \parallel Y \parallel \big ) \\ {}&\quad + D_1(\parallel X \parallel + \parallel Y \parallel )\big ( \alpha \parallel X \parallel + 2 \parallel Y \parallel \big ). \end{aligned} \end{aligned}$$

By simplifying further and using the inequality \(2 \parallel X \parallel \parallel Y \parallel \le \parallel X \parallel ^2 + \parallel Y \parallel ^2\), we arrive at

$$\begin{aligned}\begin{aligned} {\dot{V}}(t)&\le - K_1\{ \parallel X \parallel ^2 + \parallel Y \parallel ^2\} + D_0\big ( \alpha \parallel X \parallel + 2 \parallel Y \parallel \big ) \\ {}&+ D_1\big (\frac{3 \alpha + 2}{2}\big )\parallel X \parallel ^2 + D_1\big (\frac{6 + \alpha }{2} \big )\parallel Y \parallel ^2\\ {}&\le - (K_1 - D_1K_2)\{ \parallel X \parallel ^2 + \parallel Y \parallel ^2\} + D_0\big ( \alpha \parallel X \parallel + 2 \parallel Y \parallel \big ), \end{aligned} \end{aligned}$$

where \(K_2 = \max \{ \big (\frac{3 \alpha + 2 }{2}\big ),\big (\frac{6 + \alpha }{2} \big )\}.\) If we now choose \(D_1 < K_1K_2^{-1}\) and follow the same procedure as Omeike et al. [16], then there exists some \(\beta >0\) such that

$$\begin{aligned} {\dot{V}} \le&-\beta (\Vert X\Vert ^2+\Vert Y\Vert ^2)+k\beta (\Vert X\Vert +\Vert Y\Vert )\\=&-\frac{\beta }{2}(\Vert X\Vert ^2+\Vert Y\Vert ^2)- \frac{\beta }{2}\left\{ (\Vert X\Vert -k)^2+(\Vert Y\Vert -k)^2\right\} + \beta k^2\\ \le&-\frac{\beta }{2}(\Vert X\Vert ^2+\Vert Y\Vert ^2)+{\beta }k^2, \end{aligned}$$

for some \(k,\beta >0\).
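The completion-of-squares step above, from \(-\beta (\Vert X\Vert ^2+\Vert Y\Vert ^2)+k\beta (\Vert X\Vert +\Vert Y\Vert )\) to \(-\frac{\beta }{2}(\Vert X\Vert ^2+\Vert Y\Vert ^2)+\beta k^2\), can be verified mechanically. A throwaway numeric check (the values of \(\beta\) and \(k\) are arbitrary positive constants chosen by us):

```python
import random

beta, k = 0.7, 2.3   # arbitrary positive constants for illustration
random.seed(1)
ok = True
for _ in range(10000):
    u, v = random.uniform(0, 10), random.uniform(0, 10)   # u = ||X||, v = ||Y||
    lhs = -beta * (u * u + v * v) + k * beta * (u + v)
    rhs = -0.5 * beta * (u * u + v * v) + beta * k * k
    # The dropped term -(beta/2)[(u - k)^2 + (v - k)^2] is nonpositive,
    # so lhs <= rhs must hold for every u, v >= 0.
    ok = ok and (lhs <= rhs + 1e-9)
```
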

It is now possible to apply Lemma 2.4 to the solutions of Eq. (1.1) as a consequence of assumption (iii) of Theorem 3.1. Thus, from the proof of Theorem 3.1, we have \(W = \frac{D_2}{2}\{ \parallel X \parallel ^2 + \parallel Y \parallel ^2\}\), \(W_1 = \big (\Delta _h + \alpha ^2\big )\parallel X \parallel ^2 + \frac{3}{2} \parallel Y \parallel ^2 \), \(W_2 = \lambda r(t)\) and \(W_3 = \frac{\beta }{2}(\Vert X\Vert ^2+\Vert Y\Vert ^2)\). Hence, by Lemma 2.4, we conclude that all the solutions of Eq. (1.1) or system (1.2) are uniformly ultimately bounded. \(\square \)

4 Example

We provide the following example as a special case of equation (1.1).

Example 4.1

$$\begin{aligned} \left( \begin{array}{c} x_1^{{\prime \prime }} \\ x_2^{\prime \prime } \end{array} \right)&+ \left( \begin{array}{cc} 13 + e^{-(x_1^2 + x_2^2)} &{} 0 \\ 0 &{} 13 + e^{-(x^{\prime 2}_1 + x^{\prime 2}_2)} \end{array} \right) \left( \begin{array}{c} x^{\prime }_1 \\ x^{\prime }_2 \end{array} \right) \\ {}&+ \left( \begin{array}{c} 2x_1(t-r(t)) + \sin x_1(t-r(t)) \\ 2x_2(t-r(t)) + \sin x_2(t-r(t)) \end{array} \right) \\ {}&= \left( \begin{array}{c} \frac{x_1 + x^{\prime }_1 + 1}{1 +t^2} \\ \frac{x_2 + x^{\prime }_2 + 1}{1 + t^2} \end{array} \right) , \end{aligned}$$

where

$$\begin{aligned} H(X(t-r(t)))= & {} \left( \begin{array}{c} 2x_1(t-r(t)) + \sin x_1(t-r(t)) \\ 2x_2(t-r(t)) + \sin x_2(t-r(t)) \end{array} \right) , ~ r(t) = \frac{1}{8}\cos ^2t,\\ F(X, X^{\prime } )= & {} \left( \begin{array}{cc} 13 + e^{-(x_1^2 + x_2^2)} &{} 0 \\ 0 &{} 13 + e^{-(x^{\prime 2}_1 + x^{\prime 2}_2)} \end{array} \right) ~ \text{ and }~ \\ P(t, X, X^{\prime })= & {} \left( \begin{array}{c} \frac{x_1 + x^{\prime }_1 + 1}{1 +t^2} \\ \frac{x_2 + x^{\prime }_2 + 1}{1 + t^2} \end{array} \right) . \end{aligned}$$

The variable delay r(t) and its derivative \(r^{\prime }(t)\), respectively, satisfy \(0 \le r(t) = \frac{1}{8}\cos ^2t \le \frac{1}{8} = \gamma \) and \(r^{\prime }(t) = -\frac{1}{4}\sin t\cos t \le \frac{1}{8} < \frac{1}{4} = \xi \).
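These bounds on \(r(t)\) and \(r^{\prime}(t)\) are easy to confirm numerically; the short check below samples the delay over two full periods (Python; the sampling grid is our choice):

```python
import math

def r(t):
    # Variable delay of the example: r(t) = cos^2(t) / 8.
    return math.cos(t) ** 2 / 8.0

def r_prime(t):
    # Its derivative: r'(t) = -(1/4) sin(t) cos(t) = -(1/8) sin(2t).
    return -0.25 * math.sin(t) * math.cos(t)

ts = [k * 0.01 for k in range(0, 700)]   # t in [0, 7), covering two periods
r_max = max(r(t) for t in ts)
r_min = min(r(t) for t in ts)
rp_max = max(r_prime(t) for t in ts)
```
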

The eigenvalues of \(F(X,X^{\prime })\) are

$$\begin{aligned} \lambda _1(F(X,X^{\prime })) = 13 + e^{-(x_1^2 + x_2^2)}, \end{aligned}$$

and

$$\begin{aligned} \lambda _2(F(X,X^{\prime })) = 13 + e^{-(x^{\prime 2}_1 + x^{\prime 2}_2)}. \end{aligned}$$

Hence, we have \(\delta _f = 13 \le \lambda _i(F(X,X^{\prime })) \le 14 = \Delta _f.\)

Also, the Jacobian matrix \(J_H(X(t-r(t)))\) of \(H(X(t-r(t)))\) is

$$\begin{aligned} J_H(X(t-r(t))) = \left( \begin{array}{cc} 2 + \cos x_1(t-r(t)) &{} 0 \\ 0 &{} 2 + \cos x_2(t-r(t)) \end{array} \right) , \end{aligned}$$

and its eigenvalues satisfy \( \delta _h = 1 \le \lambda _i(J_H(X)) \le 3 = \Delta _h.\)

From the above calculations, we have \(\delta _f = 13\), \(\Delta _f = 14\), \(\delta _h = 1\), \(\Delta _h = 3\), \(\epsilon = 1\), \(\alpha = 14\), \(\gamma = \frac{1}{8}\) and \(\xi = \frac{1}{4}.\)

Therefore,

$$\begin{aligned} 0< \gamma = \frac{1}{8} < \min \Bigg ( \frac{2 \delta _h - \epsilon }{\Delta _h}, \frac{(1 - \xi )( 2 \alpha - \epsilon (\alpha + 4) )}{\Delta _h\big (2(2- \xi ) + \alpha \big )}\Bigg )= \min \Big ( \frac{1}{3}, \frac{1}{7}\Big ) = \frac{1}{7}. \end{aligned}$$

Lastly,

$$\begin{aligned} P(t, X, X^{\prime })= & {} \frac{1}{1 +t^2}\left( \begin{array}{c} x_1 + x^{\prime }_1 + 1 \\ x_2 + x^{\prime }_2 + 1 \end{array} \right) \\ \Vert P(t, X, X^{\prime })\Vert \le & {} 2 + \sqrt{3}\{ \parallel X \parallel + \parallel X^{\prime } \parallel \} . \end{aligned}$$

Hence, with \(D_0 = 2\) and \(D_1 = \sqrt{3}\), the example satisfies all the conditions of the theorem.
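Uniform ultimate boundedness can also be observed numerically. The rough forward-Euler sketch below (Python; the constant initial history, step size, horizon, and an index-based lag for the variable delay are all our choices, so this is an approximation, not a proof) integrates the example system and tracks the largest value of \(\Vert X\Vert + \Vert Y\Vert\) along the trajectory:

```python
import math

# Forward-Euler integration of the Section 4 system with X' = Y,
# Y' = -F(X, Y)Y - H(X(t - r(t))) + P(t, X, Y).
def simulate(x0, y0, T=40.0, dt=0.001):
    n_hist = int(0.125 / dt) + 2          # r(t) <= 1/8, so this much history suffices
    xs = [list(x0)] * n_hist              # constant initial history (our assumption)
    x, y = list(x0), list(y0)
    t, max_norm = 0.0, 0.0
    for _ in range(int(T / dt)):
        r = math.cos(t) ** 2 / 8.0
        lag = min(int(r / dt), len(xs) - 1)
        xd = xs[-1 - lag]                 # approximate delayed state x(t - r(t))
        f1 = 13.0 + math.exp(-(x[0] ** 2 + x[1] ** 2))
        f2 = 13.0 + math.exp(-(y[0] ** 2 + y[1] ** 2))
        h = [2.0 * xd[i] + math.sin(xd[i]) for i in range(2)]
        p = [(x[i] + y[i] + 1.0) / (1.0 + t * t) for i in range(2)]
        fy = [f1 * y[0], f2 * y[1]]
        y = [y[i] + dt * (-fy[i] - h[i] + p[i]) for i in range(2)]
        x = [x[i] + dt * y[i] for i in range(2)]
        xs.append(list(x))
        if len(xs) > n_hist:
            xs.pop(0)
        t += dt
        max_norm = max(max_norm,
                       math.hypot(x[0], x[1]) + math.hypot(y[0], y[1]))
    return x, y, max_norm

xf, yf, m = simulate((2.0, -1.5), (0.0, 1.0))
```

With the strong damping (\(F \ge 13I\)) the trajectory stays in a modest bounded region and settles near the origin, consistent with the theorem.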

5 Conclusion

In this paper, we used a suitable Lyapunov–Krasovskii functional to establish sufficient conditions for the uniform ultimate boundedness of solutions of a certain second order non-linear vector delay differential equation. An example was given to illustrate our result.