1 Introduction

In recent years, the stability theory and asymptotic behavior of differential equations and their applications have received, and continue to receive, intensive attention. The boundedness and stability of solutions of vector differential equations have been studied by many authors, who have developed a variety of techniques, especially for delay differential equations.

In 1985, Abou El Ala [1] gave sufficient conditions that ensure that all solutions of real vector differential equations of the form

$$\begin{aligned} X^{\prime \prime \prime }+F(X, X^{\prime })X^{\prime \prime }+G(X^{\prime })+H(X)=P(t, X, X^{\prime }, X^{\prime \prime }), \end{aligned}$$

are ultimately bounded. Later, in 2006, Tunç [27] proved some results on the asymptotic stability and boundedness of solutions of the vector differential equation

$$\begin{aligned} X^{\prime \prime \prime }+F(X, X^{\prime }, X'')X^{\prime \prime }+G(X^{\prime })+H(X)=P(t, X, X^{\prime }, X^{\prime \prime }). \end{aligned}$$

Our aim in this paper is to study, by using Lyapunov's second method, the asymptotic stability and uniform ultimate boundedness of solutions of the third-order nonlinear vector differential equation with bounded delay

$$\begin{aligned}&(\Psi (X')X^{\prime \prime })^{\prime }+F(X, X^{\prime }, X^{\prime \prime })X^{\prime \prime }+G(X(t-r(t)), X^{\prime }(t-r(t)))\nonumber \\&\quad +\,H(X(t-r(t)))=P(t, X, X^{\prime }, X^{\prime \prime }), \end{aligned}$$
(1.1)

when \(P\equiv 0\) and \(P\ne 0\), respectively, in which \(X\in {\mathbb {R}}^{n}\), \(\Psi :{\mathbb {R}}^{n}\rightarrow {\mathbb {R}} ^{n\times n}\), \(F:{\mathbb {R}}^{n}\times {\mathbb {R}}^{n}\times {\mathbb {R}}^{n}\rightarrow {\mathbb {R}} ^{n\times n}\), \(G :{\mathbb {R}}^{n}\times {\mathbb {R}}^{n} \rightarrow {\mathbb {R}}^{n}\), \(H:{\mathbb {R}}^{n}\rightarrow {\mathbb {R}}^{n}\) and \(P: {\mathbb {R}}_{+}\times {\mathbb {R}}^{n}\times {\mathbb {R}}^{n}\times {\mathbb {R}}^{n}\rightarrow {\mathbb {R}}^{n}\) are continuously differentiable functions with \(H(0)=G(X, 0)=0\), and \(\Psi \) is twice differentiable. The delay r(t) is differentiable with \(0\le r(t)\le \gamma \) and \(r^{\prime }(t)\le \beta _{0}\), \(0<\beta _{0}<1\), where \(\gamma \) will be determined later. The primes in (1.1) denote differentiation with respect to t, \(t\in {\mathbb {R}}^{+}\).
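
For orientation, note that in the special case \(\Psi (X')\equiv I\) (the \(n\times n\) identity matrix) and \(r(t)\equiv 0\), Eq. (1.1) reduces to

$$\begin{aligned} X^{\prime \prime \prime }+F(X, X^{\prime }, X^{\prime \prime })X^{\prime \prime }+G(X, X^{\prime })+H(X)=P(t, X, X^{\prime }, X^{\prime \prime }), \end{aligned}$$

which contains the equations studied in [1] and [27] when G does not depend on its first argument.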

Finally, the continuity of the functions \(\Psi , F, G, H\) and P guarantees the existence of solutions of (1.1). In addition, we assume that the functions \(\Psi , F, G, H\) and P satisfy a Lipschitz condition with respect to their respective arguments \(X, X'\) and \(X''\). In this case, the uniqueness of solutions of Eq. (1.1) is guaranteed.

This work further extends results given by Graef [10, 11], Remili [15,16,17,18,19,20,21,22,23,24] and Tunç [28,29,30,31,32].

2 Preliminaries

For any pair X and Y in \({\mathbb {R}}^{n}\), the symbol \(\langle X,Y\rangle \) stands for the usual scalar product, that is, \(\langle X,Y\rangle =\sum _{i=1}^{n}x_{i}y_{i}\); thus \(\langle X,X\rangle =\left\| X\right\| ^{2}.\)

The following results will be basic to the proofs of our theorems.

Lemma 2.1

[3, 4, 7,8,9, 25] Let D be a real symmetric positive definite \(n\times n\) matrix. Then for any X in \({\mathbb {R}}^{n}\), we have

$$\begin{aligned} \delta _{d}\parallel X\parallel ^{2}\le \left\langle DX,X\right\rangle \le \Delta _{d}\parallel X\parallel ^{2}, \end{aligned}$$

where \(\delta _{d}\), \(\Delta _{d}\) are the least and the greatest eigenvalues of D, respectively.
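
A short justification: since D is real symmetric, it admits an orthonormal basis of eigenvectors \(u_{1},\ldots ,u_{n}\) with eigenvalues \(\lambda _{1},\ldots ,\lambda _{n}\in [\delta _{d},\Delta _{d}]\), so that

$$\begin{aligned} \left\langle DX,X\right\rangle =\sum _{i=1}^{n}\lambda _{i}\langle X,u_{i}\rangle ^{2},\qquad \sum _{i=1}^{n}\langle X,u_{i}\rangle ^{2}=\Vert X\Vert ^{2}, \end{aligned}$$

and the two-sided estimate follows.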

Lemma 2.2

[3, 4, 7,8,9, 25] Let Q and D be any two real symmetric commuting \(n\times n\) matrices. Then

  1. (i)

    The eigenvalues \(\lambda _{i}\left( QD\right) \ \left( i=1,2,\ldots ,n\right) \) of the product matrix QD are all real and satisfy

    $$\begin{aligned} \underset{1\le j,k\le n}{\min }\lambda _{j}\left( Q\right) \lambda _{k}\left( D\right) \le \lambda _{i}\left( QD\right) \le \underset{1\le j,k\le n}{\max }\lambda _{j}\left( Q\right) \lambda _{k}\left( D\right) . \end{aligned}$$
  2. (ii)

    The eigenvalues \(\lambda _{i}\left( Q+D\right) \ \left( i=1,2,\ldots ,n\right) \) of the sum of the matrices Q and D are all real and satisfy

$$\begin{aligned} \left\{ \underset{1\le j\le n}{\min }\lambda _{j}\left( Q\right) +\underset{1\le k\le n}{\min }\lambda _{k}\left( D\right) \right\} \le \lambda _{i}\left( Q+D\right) \le \left\{ \underset{1\le j\le n}{\max }\lambda _{j}\left( Q\right) +\underset{1\le k\le n}{\max }\lambda _{k}\left( D\right) \right\} . \end{aligned}$$
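
Both parts follow from the fact that commuting real symmetric matrices are simultaneously orthogonally diagonalizable: there exist an orthonormal basis \(v_{1},\ldots ,v_{n}\) and indices j(i), k(i) such that

$$\begin{aligned} QDv_{i}=\lambda _{j(i)}\left( Q\right) \lambda _{k(i)}\left( D\right) v_{i},\qquad (Q+D)v_{i}=\left( \lambda _{j(i)}\left( Q\right) +\lambda _{k(i)}\left( D\right) \right) v_{i}, \end{aligned}$$

from which the stated bounds are immediate.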

Lemma 2.3

[2, 26] Let H(X) be a continuous vector function with \(H(0)=0\).

$$\begin{aligned} (1)&\quad \frac{d}{dt}\left( \int _{0}^{1}\left\langle H\left( \sigma X\right) ,X\right\rangle d\sigma \right) =\left\langle H\left( X\right) ,X^{\prime }\right\rangle . \\ (2)&\quad \frac{d}{dt}\left( \int _{0}^{1}\left\langle \sigma H \left( \sigma Y\right) Y , Y\right\rangle d\sigma \right) =\left\langle H\left( Y\right) Y, Y^{\prime }\right\rangle . \\ (3)&\quad \frac{d}{dt}\left( \int _{0}^{1}\langle H(X, \sigma Y)Y, Y\rangle d\sigma \right) =\langle H(X, Y)Y, Z\rangle +\int _{0}^{1} \langle J(H(X, \sigma Y)Y|X)Y, Y\rangle d\sigma . \end{aligned}$$

Lemma 2.4

[7,8,9, 13, 25] Let H(X) be a continuous vector function with \(H(0)=0\).

$$\begin{aligned} (1)&\quad \langle H(X),H(X)\rangle =2\int _{0}^{1}\int _{0}^{1}\sigma \langle J_{H}(\sigma X)J_{H}(\sigma \tau X)X,X\rangle d\sigma d\tau .\\ (2)&\quad \langle C(t)H(X),X \rangle =\int _{0}^{1} \langle C(t) J_{H}(\sigma X) X,X \rangle d\sigma .\\ (3)&\quad \int _{0}^{1}\langle C(t)H(\sigma X),X\rangle d\sigma =\int _{0}^{1}\int _{0}^{1}\sigma [\langle C(t)J_{H}(\sigma \tau X)X,X\rangle ]d\sigma d\tau . \end{aligned}$$

Lemma 2.5

Let H(X) be a continuous vector function with \(H(0)=0\). Then

$$\begin{aligned} \frac{\delta _{h}}{2}\parallel X\parallel ^{2}\le \int _{0}^{1}\left\langle H\left( \sigma X\right) ,X\right\rangle d\sigma \le \frac{\Delta _{h}}{2}\parallel X\parallel ^{2}, \end{aligned}$$

where \(\delta _{h}\), \(\Delta _{h}\) are the least and the greatest eigenvalues of \(J_{H}(X)\) (the Jacobian matrix of H), respectively.
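
This estimate can be obtained from Lemma 2.4 (3) with \(C(t)=I\), which gives

$$\begin{aligned} \int _{0}^{1}\left\langle H\left( \sigma X\right) ,X\right\rangle d\sigma =\int _{0}^{1}\int _{0}^{1}\sigma \left\langle J_{H}\left( \sigma \tau X\right) X,X\right\rangle d\sigma d\tau , \end{aligned}$$

together with the bounds \(\delta _{h}\Vert X\Vert ^{2}\le \langle J_{H}(\sigma \tau X)X,X\rangle \le \Delta _{h}\Vert X\Vert ^{2}\) (Lemma 2.1) and \(\int _{0}^{1}\int _{0}^{1}\sigma \, d\sigma d\tau =\frac{1}{2}\).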

Definition 2.6

We define the spectral radius \(\rho \left( A\right) \) of a matrix A by

$$\begin{aligned} \rho \left( A\right) =\max \left\{ |\lambda | \, :\,\, \lambda \text { is an eigenvalue of }A\right\} . \end{aligned}$$

Lemma 2.7

For any \(A\in {\mathbb {R}}^{n\times n}\) we have the norm \( \left\| A\right\| =\sqrt{\rho \left( A^{T}A\right) }\). If A is symmetric, then

$$\begin{aligned} \left\| A\right\| =\rho \left( A\right) . \end{aligned}$$
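
Indeed, for symmetric A we have \(A^{T}A=A^{2}\), whose eigenvalues are \(\lambda _{i}^{2}\left( A\right) \), so that

$$\begin{aligned} \left\| A\right\| =\sqrt{\rho \left( A^{2}\right) }=\sqrt{\underset{1\le i\le n}{\max }\lambda _{i}^{2}\left( A\right) }=\underset{1\le i\le n}{\max }\left| \lambda _{i}\left( A\right) \right| =\rho \left( A\right) . \end{aligned}$$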

We shall denote all the equivalent norms by the same notation: \( \left\| X\right\| \) for \(X\in {\mathbb {R}}^{n}\) and \(\left\| A\right\| \) for a matrix \(A\in {\mathbb {R}}^{n\times n}\).

3 Stability

The following notation (see [14]) will be useful in subsequent sections. For \(x\in {\mathbb {R}} ^{n}\), \(\left| x\right| \) is the norm of x. For a given \( r>0\) and \(t_{1}\in {\mathbb {R}}\),

$$\begin{aligned} C(t_{1})=\{\phi :[t_{1}-r,t_{1}]\rightarrow {\mathbb {R}}^{n}/\phi \text { is continuous}\}. \end{aligned}$$

In particular, \(C=C(0)\) denotes the space of continuous functions mapping the interval \(\left[ -r,0\right] \) into \({\mathbb {R}}^{n}\), and for \(\phi \in C\), \(\Vert \phi \Vert ={\sup }_{-r\le \theta \le 0}\left| \phi \left( \theta \right) \right| .\) \(C_{H}\) will denote the set of \(\phi \in C\) such that \( \Vert \phi \Vert \le H\). For any continuous function x(u) defined on \(-r\le u<A,\) where \(A>0,\) and any \(0\le t<A,\) the symbol \(x_{t}\) will denote the restriction of x(u) to the interval \(\left[ t-r,t\right] \), that is, \(x_{t}\) is the element of C defined by

$$\begin{aligned} x_{t}(\theta )=x(t+\theta ),\quad -r\le \theta \le 0. \end{aligned}$$

Consider the functional differential equation

$$\begin{aligned} x^{\prime }=f(t,x_{t}),\, \, x_{t}(\theta )=x(t+\theta )\ ,\quad -r\le \theta \le 0,\, \, t\ge 0, \end{aligned}$$
(3.1)

where \(f:I\times C_{H}\rightarrow {\mathbb {R}}^{n}\) is a continuous mapping, \(f(t,0)=0\), \(C_{H}:=\{\phi \in (C[-r,0],\ {\mathbb {R}}^{n}):\Vert \phi \Vert \le H\}\), and for \(H_{1}<H\), there exists \(L(H_{1})>0\), with \(|f(t,\phi )|<L(H_{1})\) when \(\Vert \phi \Vert <H_{1}\).

Definition 3.1

[6] An element \(\psi \in C\) is in the \(\omega \)-limit set of \(\phi \), denoted \(\Omega (\phi )\), if \(x(t,0,\phi )\) is defined on \([0,+\infty )\) and there is a sequence \(\{t_{n}\}\), \(t_{n}\rightarrow \infty \) as \(n\rightarrow \infty \), with \(\Vert x_{t_{n}}(\phi )-\psi \Vert \rightarrow 0\) as \( n\rightarrow \infty \), where \(x_{t_{n}}(\phi )=x(t_{n}+\theta ,0,\phi )\) for \(-r\le \theta \le 0\).

Definition 3.2

[6] A set \(Q\subset C_{H}\) is an invariant set if for any \(\phi \in Q\), the solution \(x(t,0,\phi )\) of (3.1) is defined on \([0,\infty )\) and \( x_{t}(\phi )\in Q\) for \(t\in [0,\infty )\).

Lemma 3.3

[5] If \(\phi \in C_{H}\)is such that the solution \(x_{t}(\phi )\) of (3.1) with \(x_{0}(\phi )=\phi \) is defined on \([0,\infty )\) and \(\Vert x_{t}(\phi )\Vert \le H_{1}<H\) for \(t\in [0,\infty )\), then \( \Omega (\phi )\) is a non-empty, compact, invariant set and

$$\begin{aligned} dist(x_{t}(\phi ),\Omega (\phi ))\rightarrow 0\quad \hbox {as }\ t\rightarrow \infty . \end{aligned}$$

Lemma 3.4

[5] Let \(V(t,\phi ):I\times C_{H}\rightarrow {\mathbb {R}}\) be a continuous functional satisfying a local Lipschitz condition, with \(V(t,0)=0\), such that:

  1. (i)

    \(W_{1}(|\phi (0)|)\le V(t,\phi ) \le W_{2}(|\phi (0)|)+ W_{3}(\Vert \phi \Vert _{2})\) where \(\Vert \phi \Vert _{2}=(\int _{t-r}^{t}\Vert \phi (s)\Vert ^{2}ds)^{\frac{1}{2}}\).

  2. (ii)

    \({\dot{V}}_{(3.1)}(t, \phi )\le -W_{4}(|\phi (0)|)\),

where \(W_{i}\ (i=1, 2, 3, 4)\) are wedges. Then the zero solution of (3.1) is uniformly asymptotically stable.

Notation and definitions

The Jacobian matrices \(J_{G_{X}}\left( X, Y\right) , J_{G_{Y}}\left( X, Y\right) , J_{H}\left( X\right) , J\big ( F(X, Y, 0)Y|X\big )\) and \(J\big ( F(X, Y, Z)Y|Z\big )\) are given by

$$\begin{aligned} J_{G_{X}}\left( X, Y\right)= & {} \bigg (\frac{\partial g_{i}}{\partial x_{j}}\bigg ) , \quad \ J_{G_{Y}}\left( X, Y\right) =\bigg (\frac{\partial g_{i}}{\partial y_{j}}\bigg ) , \quad J_{H}\left( X\right) =\bigg (\frac{\partial h_{i}}{\partial x_{j}}\bigg ) ,\\ J\big ( F(X, Y, 0)Y|X\big )= & {} \bigg (\frac{\partial }{\partial x_{j}}\underset{k=1}{\overset{n}{\sum }}f_{ik}y_{k}\bigg )=\bigg (\underset{k=1}{\overset{n}{\sum }}\frac{\partial f_{ik}}{\partial x_{j}}y_{k}\bigg ),\\ J\big ( F(X, Y, Z)Y|Z\big )= & {} \bigg (\frac{\partial }{\partial z_{j}}\underset{k=1}{\overset{n}{\sum }}f_{ik}y_{k}\bigg )=\bigg (\underset{k=1}{\overset{n}{\sum }}\frac{\partial f_{ik}}{\partial z_{j}}y_{k}\bigg ),\\ \end{aligned}$$

4 Assumptions and main results

The following assumptions will be needed throughout the paper. Let \( a, b, c, k, K, L, M, \beta \), and \(\delta \) be arbitrary but fixed positive numbers such that the following assumptions are satisfied:

  1. (H1)

    \( k\le \lambda _{j}\left( \Psi (Y)\right) \le K\).

  2. (H2)

    \(G(X,0)=0,\)\(\ b\le \lambda _{j}\left( J_{G_{Y}}\left( X, Y\right) \right) \le M\).

  3. (H3)

    \(\ -L\le \lambda _{j}\left( J_{G_{X}}\left( X, Y\right) \right) \le 0\) .

  4. (H4)

    \(\ H(0)=0\), \(\delta \le \lambda _{j}\left( J_{H}\left( X\right) \right) \le c\).

  5. (H5)

    \(\ aK\le \lambda _{j}\left( F(X, Y, Z)\right) \), \(\lambda _{j}\left( J\big ( F(X, Y, 0)Y|X\big )\right) \le 0\), \(\lambda _{j}\left( J\big ( F(X, Y, Z)Y|Z\big )\right) \ge 0\).

For ease of exposition throughout this paper we will adopt the following notation

$$\begin{aligned} \eta (t)= & {} \int _{0 }^{t }\left\| \Gamma (s)\right\| ds,\\ \Gamma (t)= & {} \frac{d}{dt} \Psi ^{-1}(Y(t))=-\Psi ^{-1}\big (Y(t)\big )\big [\frac{d}{dt} \Psi \big (Y(t)\big )\big ] \Psi ^{-1}\big (Y(t)\big ),\\ \Delta (t)= & {} \int _{t-r(t)}^{t}\left\{ J_{H}\left( X(s)\right) Y(s)+J_{G_{X}}Y(s)+J_{G_{Y}}\Psi ^{-1}(Y(s))Z(s)\right\} ds,\\ A_{1}= & {} \frac{1}{2}\left( 1+\frac{1}{k}\right) +aK+\delta ^{-1}\Vert G(X, Y)Y^{-1}-b\Vert ^{2},\\ A_{2}= & {} \frac{1}{2}\left( 1+\frac{1}{k}\right) +\delta ^{-1}\Vert F(X, Y, \Psi ^{-1}(Y)Z)-aI\Vert ^{2}. \end{aligned}$$
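
The expression for \(\Gamma (t)\) follows by differentiating the identity \(\Psi (Y(t))\Psi ^{-1}(Y(t))=I\) with respect to t:

$$\begin{aligned} 0=\frac{d}{dt}\Big [\Psi \big (Y(t)\big )\Psi ^{-1}\big (Y(t)\big )\Big ]=\Big [\frac{d}{dt}\Psi \big (Y(t)\big )\Big ]\Psi ^{-1}\big (Y(t)\big )+\Psi \big (Y(t)\big )\frac{d}{dt}\Psi ^{-1}\big (Y(t)\big ), \end{aligned}$$

so that \(\frac{d}{dt}\Psi ^{-1}(Y(t))=-\Psi ^{-1}\big (Y(t)\big )\big [\frac{d}{dt}\Psi \big (Y(t)\big )\big ] \Psi ^{-1}\big (Y(t)\big )=\Gamma (t)\).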

The main result of this section is the following theorem for the case \(P\equiv 0\).

Theorem 4.1

In addition to conditions (H1)–(H5), suppose that the following conditions are also satisfied:

  1. (i)

    \( \displaystyle \int _{0}^{\infty }\left\| \frac{d}{d s} \Psi \big (Y(s)\big )\right\| ds<\infty \).

  2. (ii)

    \(\displaystyle \frac{c}{b}<\alpha <a\).

  3. (iii)

    \(\displaystyle \beta <\min \{(ab-c)(aK)^{-1}, (ab-c)A_{1}^{-1}, \frac{1}{2 K}(a-\alpha ) A_{2}^{-1}\}\).

Then the zero solution of (1.1) is uniformly asymptotically stable, provided that

$$\begin{aligned} \gamma <\min \left\{ \delta A_{3}^{-1} ,2(1-\beta _{0})(\alpha b-c)A_{4}^{-1}, k^{2}(1-\beta _{0})(a-\alpha )A_{5}^{-1}\right\} , \end{aligned}$$

where

$$\begin{aligned} A_{3}= & {} L+M+c,\\ A_{4}= & {} (1-\beta _{0})(a+\alpha )A_{3}+(L+c)(2+\alpha +a+\beta ), \end{aligned}$$

and

$$\begin{aligned} A_{5}=2K(1-\beta _{0})A_{3}+MK(2+\alpha +a+\beta ). \end{aligned}$$

Proof

We write Eq. (1.1) as the following equivalent system

$$\begin{aligned} X^{\prime }= & {} Y \nonumber \\ Y^{\prime }= & {} \Psi ^{-1}(Y)Z \nonumber \\ Z^{\prime }= & {} -F\big (X, Y, \Psi ^{-1}(Y)Z\big )\Psi ^{-1}(Y)Z-G(X, Y)-H(X)+\Delta (t). \end{aligned}$$
(4.1)
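
Here \(Y=X^{\prime }\) and \(Z=\Psi (X^{\prime })X^{\prime \prime }\), so that \(Y^{\prime }=\Psi ^{-1}(Y)Z\). The term \(\Delta (t)\) arises from expressing the delayed arguments in (1.1) in terms of the current state: since \(\frac{d}{ds}H(X(s))=J_{H}(X(s))Y(s)\) and \(\frac{d}{ds}G(X(s),Y(s))=J_{G_{X}}Y(s)+J_{G_{Y}}\Psi ^{-1}(Y(s))Z(s)\), the fundamental theorem of calculus gives

$$\begin{aligned} G(X(t-r(t)),Y(t-r(t)))+H(X(t-r(t)))=G(X, Y)+H(X)-\Delta (t), \end{aligned}$$

with \(\Delta (t)\) as defined above.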

As a tool to prove our main results we shall use the Lyapunov functional \(W=W(t, X_{t}, Y_{t}, Z_{t})\) defined by

$$\begin{aligned} \displaystyle W(X_{t}, Y_{t}, Z_{t})=\exp \left( -\frac{\eta (t)}{\mu }\right) V(X_{t}, Y_{t}, Z_{t})=\exp \left( -\frac{\eta (t)}{\mu }\right) V, \end{aligned}$$
(4.2)

where

$$\begin{aligned} V= & {} (\alpha +a)\int _{0}^{1}\langle H(\sigma X), X \rangle d\sigma +2\int _{0}^{1}\langle G(X, \sigma Y), Y\rangle d\sigma \nonumber \\&+\,(\alpha +a)\int _{0}^{1}\sigma \langle F(X, \sigma Y, 0)Y, Y \rangle d\sigma \nonumber \\&+\,2 \langle H(X), Y \rangle +\langle \Psi ^{-1}(Y)Z, Z \rangle +a\beta \langle \Psi (Y)X, Y \rangle \nonumber \\&+\,\beta \langle X, Z \rangle +(\alpha +a) \langle Y, Z \rangle \nonumber \\&+\,\frac{1}{2}b\beta \langle X, X \rangle +\frac{1}{2}\beta \langle Y, Y \rangle +\int _{-r(t)}^{0}\int _{t+s}^{t}\left\{ \lambda _{1}\Vert Y(\theta )\Vert ^{2}+\lambda _{2}\Vert Z(\theta )\Vert ^{2}\right\} d\theta ds, \end{aligned}$$
(4.3)

where \(\mu \) is a positive constant which will be specified later in the proof. From the definition of V in (4.3), we observe that the above Lyapunov functional can be rewritten as follows

$$\begin{aligned} V= & {} \frac{1}{b}\int _{0}^{1}\int _{0}^{1}\sigma \left\langle \left[ (\alpha +a)b-2J_{H}(\tau \sigma X)\right] J_{H}(\sigma X)X, X \right\rangle d\tau d\sigma \\&+\,2\int _{0}^{1}\int _{0}^{1}\sigma \left\langle \left[ J_{G_{Y}}(X, \tau \sigma Y)-bI\right] Y, Y \right\rangle d\tau d\sigma \\&+\,\frac{1}{b}\Vert H(X)+bY\Vert ^{2}+\int _{0}^{1}\left\langle \left[ (\alpha +a)\sigma F(X,\sigma Y, 0)-\frac{1}{2}(\alpha ^{2}+a^{2})\Psi (Y)\right] Y, Y \right\rangle d\sigma \\&+\,\frac{1}{2}\Vert \alpha \Psi ^{\frac{1}{2}} (Y)Y+\Psi ^{\frac{-1}{2}}(Y)Z\Vert ^{2} \\&+\,\frac{1}{2}\Vert \beta \Psi ^{\frac{1}{2}}(Y)X+a\Psi ^{\frac{1}{2}}(Y)Y\\&+\,\Psi ^{\frac{-1}{2}}(Y)Z\Vert ^{2}+\frac{1}{2}\beta \Vert Y\Vert ^{2} +\frac{1}{2}\beta \langle (bI-\beta \Psi (Y))X, X \rangle \\&+\,\int _{-r(t)}^{0}\int _{t+s}^{t}\left\{ \lambda _{1}\Vert Y(\theta )\Vert ^{2}+\lambda _{2}\Vert Z(\theta )\Vert ^{2}\right\} d\theta ds. \end{aligned}$$

Since

$$\begin{aligned} \int _{-r(t)}^{0}\int _{t+s}^{t}\left\{ \lambda _{1}\Vert Y(\theta )\Vert ^{2}+\lambda _{2}\Vert Z(\theta )\Vert ^{2}\right\} d\theta ds \end{aligned}$$

is non-negative, and using assumptions (H1)–(H5) together with conditions (ii) and (iii) of the theorem, we find

$$\begin{aligned} V\ge & {} \frac{\delta }{2b}\left[ (\alpha +a)b-2c+\frac{b\beta }{\delta }(b-\beta K)\right] \Vert X\Vert ^{2}+\frac{1}{2}[\beta +\alpha K(a-\alpha )]\Vert Y\Vert ^{2} \\&+\,\frac{1}{2}\Vert \alpha \Psi ^{\frac{1}{2}} (Y)Y+\Psi ^{\frac{-1}{2}}(Y)Z\Vert ^{2}+\frac{1}{2}\Vert \beta \Psi ^{\frac{1}{2}}(Y)X+a\Psi ^{\frac{1}{2}}(Y)Y+\Psi ^{\frac{-1}{2}}(Y)Z\Vert ^{2}. \end{aligned}$$

From (ii) and (iii) we obtain that for sufficiently small positive constant \(\delta _{1}\)

$$\begin{aligned} V\ge \delta _{1}\left( \left\| X\right\| ^{2}+\left\| Y\right\| ^{2}+\left\| Z\right\| ^{2}\right) . \end{aligned}$$
(4.4)

Assumption (H1) and condition (i) of the theorem imply the following:

$$\begin{aligned} \eta (t)=\int _{0}^{t}\Vert \Gamma (s)\Vert ds \le k^{-2}\int _{0}^{t }\left\| \frac{d}{d s} \Psi \big (Y(s)\big )\right\| ds\le N<\infty , \end{aligned}$$
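
Here we have used the fact that, by (H1), the eigenvalues of \(\Psi (Y)\) are bounded below by k, so that \(\Vert \Psi ^{-1}(Y)\Vert \le k^{-1}\) and hence

$$\begin{aligned} \Vert \Gamma (s)\Vert =\Big \Vert \Psi ^{-1}\big (Y(s)\big )\Big [\frac{d}{ds}\Psi \big (Y(s)\big )\Big ]\Psi ^{-1}\big (Y(s)\big )\Big \Vert \le k^{-2}\Big \Vert \frac{d}{ds}\Psi \big (Y(s)\big )\Big \Vert . \end{aligned}$$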

The boundedness of \(\eta (t)\) may be combined with (4.4) to obtain

$$\begin{aligned} W\ge \delta _{1}e^{-\frac{N}{\mu }}\left( \left\| X\right\| ^{2}+\left\| Y\right\| ^{2}+\left\| Z\right\| ^{2}\right) . \end{aligned}$$
(4.5)

Now, we can deduce that there exists a continuous function \(W_{1}\) with

$$\begin{aligned} W_{1}(|\phi (0)|)\ge 0 \quad \text {and}\quad W_{1}(|\phi (0)|)\le W(t,\phi ). \end{aligned}$$

The existence of a continuous function \(W_{2}(|\phi (0)|)+ W_{3}(\Vert \phi \Vert _{2})\) which satisfies the inequality \(W(t,\phi )\le W_{2}(|\phi (0)|)+ W_{3}(\Vert \phi \Vert _{2})\), is easily verified.

Now, let \((X, Y, Z)=(X(t), Y(t), Z(t))\) be any solution of differential system (4.1).

Differentiating the function V, defined in (4.3), along the solutions of system (4.1) with respect to the independent variable t, we have

$$\begin{aligned} V^{\prime }= & {} (\alpha +a)\int _{0}^{1}\langle J(F(X, \sigma Y, 0)Y|X)Y, Y \rangle d\sigma +a\beta \langle \Psi (Y)Y, Y \rangle \\&+\,\langle (\beta X+(\alpha +a)Y+2\Psi ^{-1}(Y)Z), \Delta (t) \rangle \\&+\,r(t)(\lambda _{1}\Vert Y\Vert ^{2}+\lambda _{2}\Vert Z\Vert ^{2})+\beta \langle Y, (I+\Psi ^{-1}(Y))Z \rangle \\&+\,2\int _{0}^{1}\langle J_{G_{X}}(X, \sigma Y)Y, Y\rangle d\sigma -\beta \langle X, G(X, Y) -bY \rangle \\&-\,\beta \left\langle X, \left[ F(X, Y, \Psi ^{-1}(Y)Z)-aI\right] Z \right\rangle \\&-\,\beta \langle X, H(X)\rangle -\left\langle \left[ (\alpha +a)G(X, Y)-2J_{H}(X)Y\right] , Y \right\rangle \\&-\,\left\langle \left[ 2F\big (X, Y, \Psi ^{-1}(Y)Z\big )\Psi ^{-1}(Y)-(\alpha +a)I\right] Z, \Psi ^{-1}(Y)Z \right\rangle \\&+\,\left\langle \Gamma (t)Z, Z \right\rangle -a\beta \langle \Psi (Y)\Gamma (t)\Psi (Y)X, Y \rangle \\&-\,(\alpha +a)\left\langle \left[ F\bigg (X, Y, \Psi ^{-1}(Y)Z\bigg )-F(X, Y, 0)\right] Y, \Psi ^{-1}(Y)Z \right\rangle \\&-\,(1-r^{\prime }(t))\int _{t-r(t)}^{t}\left\{ \lambda _{1}\Vert Y(s )\Vert ^{2}+\lambda _{2}\Vert Z(s )\Vert ^{2}\right\} ds. \end{aligned}$$

Using the Schwarz inequality \(2\left| \left\langle U,V\right\rangle \right| \le \left\| U\right\| ^{2}+\left\| V\right\| ^{2}\), we obtain the following

$$\begin{aligned}&\langle (\beta X+(\alpha +a)Y+2\Psi ^{-1}(Y)Z), \Delta (t)\rangle \\&\quad \le \frac{A_{3}}{2}r(t) (\beta \Vert X\Vert ^{2}+(\alpha +a)\Vert Y\Vert ^{2}+\frac{2}{k^{2}}\Vert Z\Vert ^{2}) \\&\qquad +\,\frac{M}{2k^{2}}(2+\alpha +a+\beta )\int _{t-r(t)}^{t}\Vert Z(s)\Vert ^{2}ds\\&\qquad +\,\frac{1}{2}(2+\alpha +a+\beta )(L+c)\int _{t-r(t)}^{t}\Vert Y(s)\Vert ^{2}ds. \end{aligned}$$
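
This estimate uses the following bound on \(\Delta (t)\), which follows from its definition, (H1)–(H4), and Lemma 2.7 (the Jacobian matrices involved being controlled, as elsewhere, through their eigenvalue bounds):

$$\begin{aligned} \Vert \Delta (t)\Vert\le & {} \int _{t-r(t)}^{t}\left\{ \Vert J_{H}(X(s))\Vert \Vert Y(s)\Vert +\Vert J_{G_{X}}\Vert \Vert Y(s)\Vert +\Vert J_{G_{Y}}\Vert \Vert \Psi ^{-1}(Y(s))\Vert \Vert Z(s)\Vert \right\} ds \\\le & {} \int _{t-r(t)}^{t}\left\{ (L+c)\Vert Y(s)\Vert +\frac{M}{k}\Vert Z(s)\Vert \right\} ds. \end{aligned}$$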

Since

$$\begin{aligned} F(X, Y, Z)-F(X, Y, 0)=J(F(X, Y, \theta Z)|Z)Z \quad \text {with } 0\le \theta \le 1, \end{aligned}$$

then by (H5) we get

$$\begin{aligned} \Omega= & {} -\,(\alpha +a) \left\langle \left[ F\bigg (X, Y, \Psi ^{-1}(Y)Z\bigg )-F(X, Y, 0)\right] Y, \Psi ^{-1}(Y)Z\right\rangle \\= & {} -\,(\alpha +a)\left\langle J\bigg (F(X, Y, \theta \Psi ^{-1}(Y)Z)Y|Z\bigg )\Psi ^{-1}(Y)Z, \Psi ^{-1}(Y)Z\right\rangle \le 0. \end{aligned}$$

Consequently, by hypotheses (H1)–(H5), we get

$$\begin{aligned} V^{\prime }\le & {} \frac{A_{3}}{2}r(t) \left( \beta \Vert X\Vert ^{2}+(\alpha +a)\Vert Y\Vert ^{2}+\frac{2}{k^{2}}\Vert Z\Vert ^{2}\right) +r(t)\big (\lambda _{1}\Vert Y\Vert ^{2}+\lambda _{2}\Vert Z\Vert ^{2}\big )-\frac{1}{2}\beta \delta \Vert X\Vert ^{2}\\&-\,(\alpha b-c)\Vert Y\Vert ^{2}-\frac{1}{2K}(a-\alpha )\Vert Z\Vert ^{2}+\Vert \Gamma (t)\Vert \left( \Vert Z\Vert ^{2}+\frac{ a\beta K ^{2}}{2}(\Vert X\Vert ^{2}+\Vert Y\Vert ^{2})\right) \\&-\,\left\{ ab-c-\beta \left[ \frac{1}{2}\left( 1+\frac{1}{k}\right) +aK+\delta ^{-1}\Vert G(X, Y)Y^{-1}-b\Vert ^{2}\right] \right\} \Vert Y\Vert ^{2} \\&-\,\left\{ \frac{1}{2 K}(a-\alpha )-\beta \left[ \frac{1}{2}\left( 1+\frac{1}{k} \right) +\delta ^{-1}\Vert F(X, Y, \Psi ^{-1}(Y)Z)-aI\Vert ^{2}\right] \right\} \Vert Z\Vert ^{2}\\&-\,\frac{\beta }{4\delta }\Vert \delta X+2(G(X, Y)-b Y)\Vert ^{2}- \frac{\beta }{4\delta }\Vert \delta X+2(F(X, Y, \Psi ^{-1}(Y)Z)-aI) Z\Vert ^{2} \\&-\,\left[ \lambda _{1}(1-r^{\prime }(t))-\frac{1}{2}(2+\alpha +a+\beta )(L+c)\right] \int _{t-r(t)}^{t} \Vert Y(s )\Vert ^{2}ds\\&-\,\left[ \lambda _{2}(1-r^{\prime }(t))-\frac{M}{2k^{2}}(2+\alpha +a+\beta )\right] \int _{t-r(t)}^{t}\Vert Z(s )\Vert ^{2}ds. \end{aligned}$$

Now, in view of conditions (ii) and (iii), the fact that \(0\le r(t)\le \gamma \), and \(r^{\prime }(t)\le \beta _{0}\) with \(0<\beta _{0}<1\), we have

$$\begin{aligned} V^{\prime }\le & {} -\frac{\beta }{2}\left[ \delta -A_{3}\gamma \right] \Vert X\Vert ^{2}+\Vert \Gamma (t)\Vert \left( \Vert Z\Vert ^{2}+\frac{ a\beta K ^{2}}{2}(\Vert X\Vert ^{2}+\Vert Y\Vert ^{2})\right) \\&-\,\left\{ \alpha b-c-\left[ \frac{1}{2}(\alpha +a)A_{3}+\lambda _{1}\right] \gamma \right\} \Vert Y\Vert ^{2}-\left\{ \frac{1}{2K}(a-\alpha )-\left( \frac{A_{3}}{k^{2}}+\lambda _{2}\right) \gamma \right\} \Vert Z\Vert ^{2} \\&-\,\left[ \lambda _{1}(1-\beta _{0})-\frac{1}{2}(2+\alpha +a+\beta )(L+c) \right] \int _{t-r(t)}^{t}\Vert Y(s )\Vert ^{2}ds \\&-\,\left[ \lambda _{2}(1-\beta _{0})-\frac{M}{2k^{2}}(2+\alpha +a+\beta ) \right] \int _{t-r(t)}^{t}\Vert Z(s )\Vert ^{2}ds. \end{aligned}$$

Let

$$\begin{aligned} \displaystyle \lambda _{1}=\frac{(2+\alpha +a+\beta )(L+c)}{2(1-\beta _{0})}\quad \text {and} \quad \displaystyle \lambda _{2}=\frac{M(2+\alpha +a+\beta )}{ 2k^{2}(1-\beta _{0})}. \end{aligned}$$

Hence,

$$\begin{aligned} V^{\prime }\le & {} -\frac{\beta }{2}\left[ \delta -A_{3}\gamma \right] \Vert X\Vert ^{2}+N_{1}\Vert \Gamma (t)\Vert \big (\Vert X\Vert ^{2}+\Vert Y\Vert ^{2}+\Vert Z\Vert ^{2}\big ) \\&-\,\left\{ \alpha b-c-\left[ \frac{1}{2}(\alpha +a)A_{3}+\frac{(2+\alpha +a+\beta )(L+c)}{2(1-\beta _{0})}\right] \gamma \right\} \Vert Y\Vert ^{2} \\&-\,\left\{ \frac{1}{2K}(a-\alpha )-\left( \frac{A_{3}}{k^{2}}+\frac{M(2+\alpha +a+\beta )}{2k^{2}(1-\beta _{0})}\right) \gamma \right\} \Vert Z\Vert ^{2}, \end{aligned}$$

where \(\displaystyle N_{1}=\max \left\{ 1,\frac{a\beta K^{2}}{2}\right\} \).

Using (4.2), (4.4) and taking \(\displaystyle \mu =\frac{\delta _{1}}{ N_{1}}\) we obtain:

$$\begin{aligned} \frac{d}{dt}W= & {} \exp \left( -\frac{N_{1}\eta (t)}{\delta _{1}}\right) \left( \frac{d}{dt}V- \frac{N_{1}\Vert \Gamma (t)\Vert }{\delta _{1}}V\right) \nonumber \\\le & {} \exp \left( -\frac{N_{1}\eta (t)}{\delta _{1}}\right) \left[ -\frac{\beta }{2}\left[ \delta -A_{3}\gamma \right] \Vert X\Vert ^{2}\right. \nonumber \\&\quad -\left\{ \alpha b-c-\left[ \frac{1}{2}(\alpha +a)A_{3}+\frac{(2+\alpha +a+\beta )(L+c)}{2(1-\beta _{0})}\right] \gamma \right\} \Vert Y\Vert ^{2} \nonumber \\&\quad \left. -\left\{ \frac{1}{2K}(a-\alpha )-\left( \frac{A_{3}}{k^{2}}+\frac{M(2+\alpha +a+\beta )}{2k^{2}(1-\beta _{0})}\right) \gamma \right\} \Vert Z\Vert ^{2}\right] . \end{aligned}$$
(4.6)

Provided that

$$\begin{aligned} \gamma <\min \left\{ \delta A_{3}^{-1},2(1-\beta _{0})(\alpha b-c)A_{4}^{-1},k^{2}(1-\beta _{0})(a-\alpha )A_{5}^{-1}\right\} , \end{aligned}$$

the inequality (4.6) becomes

$$\begin{aligned} \frac{d}{dt}W(X_{t}, Y_{t}, Z_{t})\le -\delta _{2}\big (\Vert X\Vert ^{2}+\Vert Y\Vert ^{2}+\Vert Z\Vert ^{2}\big ),\quad \text {for some }\ \ \delta _{2}>0. \end{aligned}$$
(4.7)

Thus, all the conditions of Lemma 3.4 are satisfied. This shows that the zero solution of (1.1) is uniformly asymptotically stable. \(\square \)

5 Boundedness of solutions

First, consider a system of delay differential equations

$$\begin{aligned} x^{\prime }=F(t,x_{t}),\quad x_{t}(\theta )=x(t+\theta ),\quad -r\le \theta \le 0,\quad t\ge 0, \end{aligned}$$
(5.1)

where \(F:{\mathbb {R}}\times C_{H}\longrightarrow {\mathbb {R}}^{n}\) is a continuous mapping that takes bounded sets into bounded sets.

The following lemma is a well-known result obtained by Burton [5].

Lemma 5.1

[5] Let \(V\left( t,\phi \right) :{\mathbb {R}}\times C_{H}\longrightarrow {\mathbb {R}}\) be continuous and locally Lipschitz in \(\phi \). If

  1. (i)

    \(W\left( \left| x\left( t\right) \right| \right) \le V\left( t,x_{t}\right) \le W_{1}\left( \left| x\left( t\right) \right| \right) +W_{2}\left( \int _{t-r(t)}^{t}W_{3}\left( \left| x\left( s\right) \right| \right) ds\right) \),

  2. (ii)

    \(V_{(5.1)}^{\prime }\le -W_{3}\left( \left| x\left( t\right) \right| \right) +M\) for some \(M>0\), where \(W\left( r\right) ,W_{i}\ \left( i=1,2,3\right) \) are wedges,

then the solutions of (5.1) are uniformly bounded and uniformly ultimately bounded for bound B.

To study the ultimate boundedness of solutions of (1.1), we write (1.1) in the form

$$\begin{aligned} X^{\prime }= & {} Y \nonumber \\ Y^{\prime }= & {} \Psi ^{-1}(Y)Z \nonumber \\ Z^{\prime }= & {} -F\big (X, Y, \Psi ^{-1}(Y)Z\big )\Psi ^{-1}(Y)Z-G(X, Y)-H(X)+\Delta (t)\nonumber \\&+\,P(t, X, Y, \Psi ^{-1}(Y)Z). \end{aligned}$$
(5.2)

Thus our main theorem in this section is stated with respect to (5.2) as follows:

Theorem 5.2

Suppose that all the assumptions of Theorem 4.1 and the condition

$$\begin{aligned} \Vert P(t, X, Y, Z)\Vert \le p_{1}(t)+p_{2}(t)(\Vert X\Vert +\Vert Y\Vert +\Vert Z\Vert ) \end{aligned}$$
(5.3)

hold, where \(p_{1}(t)\) and \(p_{2}(t)\) are continuous functions such that

$$\begin{aligned} p_{1}(t)\le p_{0} \quad \text {and}\quad p_{2}(t)\le \epsilon , \end{aligned}$$

where \(\epsilon \) and \(p_{0}\) are positive constants. Then all solutions of system (5.2) are uniformly bounded and uniformly ultimately bounded.

Proof

Along any solution (X(t), Y(t), Z(t)) of (5.2), we have

$$\begin{aligned} \frac{d}{dt}W_{(5.2)}=\frac{d}{dt}W_{(4.1)}+ \left\langle \beta X+(\alpha +a)Y+2\Psi ^{-1}(Y)Z, P(t, X, Y, \Psi ^{-1}(Y)Z) \right\rangle . \end{aligned}$$

From (4.7), we obtain

$$\begin{aligned} \frac{d}{dt}W_{(5.2)}\le -\delta _{2}(\Vert X\Vert ^{2}+\Vert Y\Vert ^{2}+\Vert Z\Vert ^{2})+\kappa _{1}(\Vert X\Vert +\Vert Y\Vert +\Vert Z\Vert )\Vert P(t,X, Y, \Psi ^{-1}(Y)Z)\Vert , \end{aligned}$$

where \(\displaystyle \kappa _{1}=\max \left\{ \beta ,\alpha +a,\frac{2}{k}\right\} \).

Choose \(\epsilon <3^{-1}\kappa _{1}^{-1}\delta _{2}\), so that \(\kappa _{2}=\delta _{2} -3\kappa _{1}\epsilon >0\).
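
In detail, using (5.3), the elementary inequality \((\Vert X\Vert +\Vert Y\Vert +\Vert Z\Vert )^{2}\le 3(\Vert X\Vert ^{2}+\Vert Y\Vert ^{2}+\Vert Z\Vert ^{2})\), and the choice of \(\epsilon \), the last estimate gives

$$\begin{aligned} \frac{d}{dt}W_{(5.2)}\le & {} -\delta _{2}(\Vert X\Vert ^{2}+\Vert Y\Vert ^{2}+\Vert Z\Vert ^{2})+\kappa _{1}p_{0}(\Vert X\Vert +\Vert Y\Vert +\Vert Z\Vert )\\&+\,3\kappa _{1}\epsilon (\Vert X\Vert ^{2}+\Vert Y\Vert ^{2}+\Vert Z\Vert ^{2})\\= & {} -\kappa _{2}(\Vert X\Vert ^{2}+\Vert Y\Vert ^{2}+\Vert Z\Vert ^{2})+\kappa _{1}p_{0}(\Vert X\Vert +\Vert Y\Vert +\Vert Z\Vert ). \end{aligned}$$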

From this we obtain

$$\begin{aligned} \frac{d}{dt}W_{(5.2)}\le -\frac{\kappa _{2}}{2}(\left\| X\right\| ^{2}+\left\| Y\right\| ^{2}+\left\| Z\right\| ^{2})+ \frac{3}{2}\kappa _{1}^{2}p_{0}^{2}\kappa _{2}^{-1}, \end{aligned}$$
(5.4)

since

$$\begin{aligned} \frac{\kappa _{2}}{2}\left\{ \bigg (\Vert X\Vert -\kappa _{1}p_{0}\kappa _{2}^{-1}\bigg ) ^{2}+\bigg (\Vert Y\Vert -\kappa _{1}p_{0}\kappa _{2}^{-1}\bigg )^{2}+\bigg (\Vert Z\Vert -\kappa _{1}p_{0}\kappa _{2}^{-1}\bigg )^{2}\right\} \ge 0, \end{aligned}$$

for all \(X, Y\) and Z. From estimate (5.4), hypothesis (ii) of Lemma 5.1 is satisfied. Also, condition (i) of Lemma 5.1 follows from estimate (4.5) and the fact that \( W(t,\phi )\le W_{2}(\Vert \phi \Vert )+W_{3}(\int _{t-r(t)}^{t}W_{4}(\phi (s))ds)\). This completes the proof of the theorem. \(\square \)