Abstract
This paper is concerned with the stability analysis of neural networks with time-varying delays. By constructing a newly augmented Lyapunov–Krasovskii functional and employing some novel techniques, delay-dependent criteria guaranteeing the asymptotic stability of the concerned networks are derived in terms of linear matrix inequalities (LMIs). Two numerical examples show that the proposed criteria enlarge the feasible region compared with previous works.
1 Introduction
In the past few decades, neural networks have been applied in many fields, such as image and signal processing, pattern recognition, fixed-point computations, optimization, and other scientific areas [1–8]. Before considering these applications, it is a prerequisite to check whether the equilibrium points of the designed networks are stable, since the applicability of these networks depends heavily on the dynamic behavior of the equilibrium points. In applications of neural networks, it is well recognized that time delays naturally occur due to the finite switching speed of amplifiers and may cause performance degradation, oscillation, or even instability. Therefore, much attention has been paid to the delay-dependent stability analysis of neural networks with time delays [9–30], since delay-dependent stability criteria are generally less conservative than delay-independent ones when the sizes of the delays are small.
In delay-dependent stability analysis, the most frequently used index for checking the conservatism of stability criteria is the maximum delay bound that guarantees the asymptotic stability of the concerned networks. Comparing maximum delay bounds with those of existing works is thus a standard means of showing the superiority of delay-dependent stability criteria. It should be noted that a stability criterion providing larger delay bounds has an enlarged feasible region, so its applicable range in areas such as state estimation, filtering, and synchronization is increased. Remarkable approaches in delay-dependent stability analysis include the Jensen inequality [31], model transformation [32, 33], the free-weighting-matrix technique [34], some new Lyapunov–Krasovskii functionals [19, 20, 35, 36], zero equalities [37], reciprocally convex optimization [38], and so on. In [29], a new activation function condition was proposed to reduce the conservatism of stability criteria for neural networks with time-varying delays.
Recently, based on the delay-partitioning idea first proposed by Gu [31], various methods for the stability analysis of neural networks with time delays were proposed in [21–28]. In [21], by dividing the delay interval into two subintervals and utilizing different free weighting matrices on each subinterval, improved delay-dependent stability criteria for neural networks with interval time-varying delays were proposed. In [23], by introducing a tuning parameter adjusting the delay interval, some new delay-partitioning stability criteria for neural networks with time-varying delay were introduced. Kwon et al. [26] proposed a new delay-partitioning method by constructing a different Lyapunov–Krasovskii functional on each delay subinterval. In [28], by utilizing the methods of [21], exponential stability analysis of neural networks with interval time-varying delays and general activation functions was conducted.
Another remarkable approach to reducing the conservatism of delay-dependent stability analysis is the use of new Lyapunov–Krasovskii functionals, which allow more information about the system to be utilized and thus can increase the feasible region of stability criteria. Since triple-integral Lyapunov–Krasovskii functionals were proposed in [35, 36], many studies (see [27, 28] and [30]) have addressed the stability analysis of neural networks by employing triple-integral terms in the Lyapunov–Krasovskii functional. Very recently, by constructing quadruple-integral terms in the Lyapunov–Krasovskii functional, new delay-dependent stability criteria for neural networks with time-varying delays were reported in [30] based on quadratic convex combination. However, the methods proposed in [30] have some limitations in increasing the maximum delay bounds guaranteeing asymptotic stability, since the system states and activation functions were not fully utilized in the augmented vector. Thus, there is room for further improvement in reducing the conservatism of stability criteria.
Motivated by the above discussion, in this paper some improved delay-dependent stability criteria for neural networks with time-varying delays are proposed. By constructing a newly augmented Lyapunov–Krasovskii functional and proposing some new zero equalities, a sufficient condition for the asymptotic stability of the considered neural networks is derived in terms of LMIs in Theorem 1. In Theorem 2, based on the results of Theorem 1 and [39, 40], a further improved stability criterion is proposed by guaranteeing the positiveness of the Lyapunov–Krasovskii functional in a relaxed manner. Through two numerical examples that have been utilized in many previous works to check the conservatism of stability criteria, it is shown that the proposed criteria provide larger delay bounds than recent existing results.
Notation
\(\mathbb{R}^{n}\) is the n-dimensional Euclidean space, and \(\mathbb{R}^{m \times n}\) denotes the set of all m×n real matrices. For symmetric matrices X and Y, X>Y (respectively, X≥Y) means that the matrix X−Y is positive definite (respectively, nonnegative). X ⊥ denotes a basis for the null space of X; I n , 0 n , and 0 m×n denote the n×n identity matrix and the n×n and m×n zero matrices, respectively; ∥⋅∥ refers to the Euclidean vector norm or the induced matrix norm; \(\operatorname{diag} \{ \cdots\}\) denotes a block diagonal matrix. For a square matrix S, \(\operatorname{Sym}\{S\}\) denotes the sum of S and its transpose S T, i.e., \(\operatorname{Sym}\{S\}=S+S^{T}\). The symbol ⋆ represents the elements below the main diagonal of a symmetric matrix. \(X_{[f(t)]} \in\mathbb{R}^{m \times n}\) means that the elements of the matrix X [f(t)] include the scalar value of f(t).
2 Problem statement and preliminaries
Consider the following neural networks with discrete time-varying delays:
where \(y(t)= [y_{1} (t),\ldots,y_{n} (t) ]^{T} \in\mathbb{R}^{n}\) is the neuron state vector, n is the number of neurons in a neural network, \(g(y(t)) = [g_{1} (y_{1} (t)),\ldots,g_{n} (y_{n} (t)) ]^{T} \in\mathbb {R}^{n}\) is the neuron activation function, \(g(y(t-h(t))) = [g_{1} (y_{1} (t-h(t))),\ldots,g_{n} (y_{n} (t-h(t))) ]^{T} \in\mathbb{R}^{n}\), \(A=\operatorname{diag}\{a_{i}\} \in\mathbb{R}^{n \times n}\) is a positive diagonal matrix, \(W_{0}=(w^{0}_{ij})_{n \times n}\in\mathbb{R}^{n \times n}\) and \(W_{1}=(w^{1}_{ij})_{n \times n}\in\mathbb{R}^{n \times n}\) are the interconnection matrices representing the weight coefficients of the neurons, and \(b= [b_{1}, b_{2}, \ldots,b_{n} ]^{T}\in\mathbb{R}^{n }\) is a constant input vector.
The delay, h(t), is a time-varying continuous function satisfying
where h U is a known positive scalar and h D is a known constant.
The neuron activation functions satisfy the following assumption.
Assumption 1
The neuron activation functions g i (⋅), i=1,…,n, with g i (0)=0 are continuous, bounded and satisfy
where \(k^{+}_{i}\) and \(k^{-}_{i}\) are constants.
Remark 1
In Assumption 1, \(k^{+}_{i}\) and \(k^{-}_{i}\) are allowed to be positive, negative, or zero. As mentioned in [19], Assumption 1 describes the class of globally Lipschitz continuous and monotonically nondecreasing activation functions when \(k^{-}_{i}=0\) and \(k^{+}_{i}>0\), and the class of globally Lipschitz continuous and monotonically increasing activation functions when \(k^{+}_{i}>k^{-}_{i}>0\).
For simplicity of the stability analysis of the neural networks (1), the equilibrium point \(y^{*} = [y^{*}_{1},\ldots,y^{*}_{n} ]^{T}\), whose uniqueness has been reported in [11], is shifted to the origin by the transformation x(⋅)=y(⋅)−y ∗, which transforms system (1) into the following form:
where \(x(t) = [x_{1} (t),\ldots,x_{n} (t) ]^{T} \in\mathbb{R}^{n}\) is the state vector of the transformed system, f(x(t))=[f 1(x 1(t)),…,f n (x n (t))]T and \(f_{j} (x_{j} (t)) = g_{j} (x_{j} (t) + y^{*}_{j})-g_{j} (y^{*}_{j})\) with f j (0)=0 (j=1,…,n).
It should be noted that the activation functions f i (⋅) (i=1,…,n) with f i (0)=0 satisfy the following condition [4]:
If v=0 in (5), then we have
which is equivalent to
The objective of this paper is to investigate the delay-dependent stability analysis of system (4) which will be conducted in Sect. 3 via some newly augmented Lyapunov–Krasovskii functionals.
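Before proceeding, it may help to see the sector condition on the shifted activation functions concretely. The following sketch checks numerically that f(x) = tanh(x), a typical activation satisfying Assumption 1, lies in the sector [0, 1], i.e., condition (6) holds with \(k^{-}_{i}=0\) and \(k^{+}_{i}=1\); the tolerance values are illustrative choices, not from the paper.

```python
import numpy as np

# Verify that f(x) = tanh(x) satisfies the sector condition
# k_minus <= f(x)/x <= k_plus with k_minus = 0 and k_plus = 1,
# i.e., the form of condition (6) obtained by setting v = 0 in (5).
k_minus, k_plus = 0.0, 1.0
xs = np.linspace(-10.0, 10.0, 2001)
xs = xs[xs != 0.0]                 # the ratio f(x)/x is defined for x != 0
ratios = np.tanh(xs) / xs
assert np.all(ratios >= k_minus - 1e-12)
assert np.all(ratios <= k_plus + 1e-12)
```

The same check applies to the scaled activations 0.4 tanh(⋅) and 0.8 tanh(⋅) used later in Example 2, with k+ replaced by 0.4 and 0.8, respectively.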
The following lemmas will be utilized in deriving the main results.
Lemma 1
[41]
For a positive definite matrix M and scalars h U >h L >0 such that the following integrations are well defined, it holds that
-
(a)
$$\begin{aligned} &{ -(h_U -h_L) \int ^{t-h_L}_{t-h_U} x^T (s) M x(s)\,ds } \\ &{\quad\leq - \biggl(\int^{t-h_L}_{t-h_U} x (s)\,ds \biggr)^T M \biggl(\int^{t-h_L}_{t-h_U} x (s) \,ds \biggr) ,} \end{aligned}$$(8)
-
(b)
$$\begin{aligned} &{-\bigl(\bigl(h^2_U -h^2_L\bigr)/2\bigr) \int^{t-h_L}_{t-h_U} \int^t_s x^T (u) M x(u)\,du\,ds} \\ &{\quad\leq - \biggl(\int^{t-h_L}_{t-h_U} \int ^t_s x (u) \,du\,ds \biggr)^T} \\ &{\qquad{}\times M \biggl(\int^{t-h_L}_{t-h_U} \int^t_s x (u) \,du\,ds \biggr) ,} \end{aligned}$$(9)
-
(c)
$$\begin{aligned} &{-\bigl(\bigl(h^3_U -h^3_L\bigr)/6\bigr)} \\ &{\qquad{}\times \int^{t-h_L}_{t-h_U} \int^t_s \int^t_u x^T (v) M x(v) \,dv\,du\,ds} \\ &{\quad\leq- \biggl(\int^{t-h_L}_{t-h_U} \int ^t_s \int^t_u x (v) \,dv\,du\,ds \biggr)^T} \\ &{\qquad{}\times M \biggl( \int^{t-h_L}_{t-h_U} \int^t_s \int^t_u x (v) \,dv\,du\,ds \biggr).} \end{aligned}$$(10)
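Inequality (a) of Lemma 1 can be verified numerically on a sample trajectory. The sketch below uses an illustrative positive definite M and an illustrative x(s) (neither taken from the paper) and checks the bound with a Riemann-sum discretization of the integrals.

```python
import numpy as np

# Numerical check of inequality (a) in Lemma 1 (a Jensen-type bound):
# -(hU - hL) * INT x^T(s) M x(s) ds  <=  -(INT x(s) ds)^T M (INT x(s) ds)
# over [t - hU, t - hL] with t = 0; M and x(s) are illustrative choices.
rng = np.random.default_rng(0)
n = 3
B = rng.standard_normal((n, n))
M = B @ B.T + n * np.eye(n)                 # positive definite M
hL, hU = 0.2, 1.0
N = 4000
s = np.linspace(-hU, -hL, N)                # integration grid
ds = s[1] - s[0]
x = np.stack([np.sin(3 * s + k) for k in range(n)])   # x(s), shape (n, N)

quad = np.einsum('it,ij,jt->t', x, M, x).sum() * ds   # INT x^T M x ds
ix = x.sum(axis=1) * ds                               # INT x ds
lhs = -(hU - hL) * quad
rhs = -ix @ M @ ix
assert lhs <= rhs + 1e-9
```

Inequalities (b) and (c) can be checked in the same way with double and triple Riemann sums; the factors (h²_U − h²_L)/2 and (h³_U − h³_L)/6 are the measures of the corresponding integration domains.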
Lemma 2
[42]
Let \(\zeta\in\mathbb{R}^{n}\), \(\varPhi=\varPhi^{T} \in\mathbb{R}^{n \times n}\), and \(B \in\mathbb{R}^{m \times n}\) such that rank(B)<n. Then, the following statements are equivalent:
-
(1)
ζ T Φζ<0, Bζ=0, ζ≠0;
-
(2)
(B ⊥)T ΦB ⊥<0, where B ⊥ is a right orthogonal complement of B.
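Lemma 2 (Finsler's lemma) can be illustrated numerically: statement (2) restricts Φ to the null space of B, which is exactly where statement (1) constrains ζ to lie. The sketch below builds an illustrative B and a Φ that is negative definite on null(B) (both hypothetical, not from the paper), computes B ⊥ via the SVD, and checks both statements.

```python
import numpy as np

# Illustration of Lemma 2: zeta^T Phi zeta < 0 for all zeta != 0 with
# B zeta = 0 is equivalent to (B_perp)^T Phi B_perp < 0, where the
# columns of B_perp span the null space of B.
rng = np.random.default_rng(1)
n, m = 5, 2
B = rng.standard_normal((m, n))

# right orthogonal complement via SVD: last n-m right singular vectors
_, _, Vt = np.linalg.svd(B)
B_perp = Vt[m:].T                           # shape (n, n-m), B @ B_perp ~ 0
assert np.allclose(B @ B_perp, 0, atol=1e-10)

# Phi = B^T B - 0.1 I is negative definite on null(B), where B^T B vanishes
Phi = B.T @ B - 0.1 * np.eye(n)
reduced = B_perp.T @ Phi @ B_perp
assert np.all(np.linalg.eigvalsh(reduced) < 0)        # statement (2)

# statement (1): sampled vectors in null(B) satisfy zeta^T Phi zeta < 0
for _ in range(100):
    z = B_perp @ rng.standard_normal(n - m)
    assert z @ Phi @ z < 0
```

In the proof of Theorem 1, this lemma converts the constrained condition ζ^T(t)Φ_{[h(t)]}ζ(t)<0 with Γζ(t)=0 into the unconstrained LMI on (Γ ⊥)^T Φ_{[h(t)]} Γ ⊥.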
Lemma 3
[39]
For symmetric matrices Ω>0 and Ξ of appropriate dimensions and a matrix Λ, the following two statements are equivalent:
-
(1)
Ξ−Λ T ΩΛ<0;
-
(2)
There exists a matrix Ψ of appropriate dimension such that
$$\begin{aligned} \left [ \begin{array}{c@{\quad}c} \varXi+ \varLambda^T \varPsi+ \varPsi^T \varLambda& \varPsi^T \\ \varPsi & -\varOmega \end{array} \right ]<0. \end{aligned}$$(11)
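One direction of Lemma 3 can be checked numerically: if Ξ−Λ^T ΩΛ<0, the particular choice Ψ=−ΩΛ makes the block inequality (11) hold, since the Schur complement of the (2,2) block −Ω then reduces exactly to Ξ−Λ^T ΩΛ. The sketch below uses illustrative random matrices (not from the paper).

```python
import numpy as np

# Illustration of Lemma 3 (one direction): given Omega > 0 and
# Xi - Lam^T Omega Lam < 0, the choice Psi = -Omega @ Lam makes the
# block matrix in (11) negative definite.
rng = np.random.default_rng(2)
n = 4
C = rng.standard_normal((n, n))
Omega = C @ C.T + np.eye(n)                 # Omega > 0
Lam = rng.standard_normal((n, n))
Xi = Lam.T @ Omega @ Lam - np.eye(n)        # so Xi - Lam^T Omega Lam = -I < 0

Psi = -Omega @ Lam
block = np.block([[Xi + Lam.T @ Psi + Psi.T @ Lam, Psi.T],
                  [Psi, -Omega]])
assert np.all(np.linalg.eigvalsh(block) < 0)   # the LMI (11) holds
```

The point of the lemma in Theorem 1 is the converse use: treating Ψ as a free decision variable makes condition (11) affine in h(t), so checking it at the two vertices h(t)=0 and h(t)=h_U suffices.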
3 Main results
In this section, we propose new stability criteria for system (4). For the sake of simplicity of matrix and vector representation, \(e_{i}\ (i=1,2,\ldots,16) \in\mathbb{R}^{16n \times n}\) which will be used in Theorems 1 and 2 are defined as block entry matrices. (For example, \(e_{3}=[0_{n},0_{n}, I_{n}, \underbrace{0_{n},\ldots, 0_{n}}_{13}]^{T}\).) The other notation for some vectors and matrices is defined as:
Now, we have the following theorem.
Theorem 1
For given scalars h U >0, h D , and diagonal matrices \(K_{p}=\operatorname{diag}\{k^{+}_{1},\ldots ,k^{+}_{n}\}\) and \(K_{m}=\operatorname{diag}\{k^{-}_{1},\ldots,k^{-}_{n}\}\), system (4) is asymptotically stable for 0≤h(t)≤h U and \(\dot{h}(t) \leq h_{D} \), if there exist positive diagonal matrices \(\varSigma=\operatorname{diag}\{\sigma _{1},\ldots,\sigma_{n}\}\), \(\varDelta=\operatorname{diag}\{\delta_{1},\ldots,\delta_{n}\}\), \(H_{i}=\operatorname{diag}\{h_{1i},\ldots,h_{ni}\}\) (i=1,…,5), positive definite matrices \(\mathcal{R} \in\mathbb{R}^{6n \times6n}\), \(\mathcal{N} \in\mathbb{R}^{3n \times3n}\), \(\mathcal{G}=[G_{ij}] \in\mathbb{R}^{5n \times5n}\), \(\mathcal{Q}_{1} \in\mathbb{R}^{3n \times3n}\), \(\mathcal{Q}_{2} \in\mathbb{R}^{3n \times3n}\), \(Q_{3} \in\mathbb{R}^{n \times n}\), any matrices \(\mathcal{S}_{1}\in\mathbb{R}^{3n \times3n}\), \(\mathcal{S}_{2}\in\mathbb{R}^{3n \times3n}\), \(\varPsi\in\mathbb{R}^{6n \times15n}\), and any symmetric matrices \(P_{i} \in\mathbb{R}^{n \times n}\) (i=1,…,4), satisfying the following LMIs:
where Φ [h(t)], \(\mathcal{P}_{i}\) (i=1,…,4), Θ and Γ are defined in (12), and Γ ⊥ is the right orthogonal complement of Γ.
Proof
Let us consider the following candidate for the appropriate Lyapunov–Krasovskii functional:
where
and
It should be noted that
and
From (18) and (19), \(\dot{V}_{1}(t)\) can be represented as
Also, from the following equation,
the time-derivative of V 2(t) can be calculated as follows:
Calculation of \(\dot{V}_{3}(t)\) leads to
Inspired by the work of [37], the following two zero equalities with any symmetric matrices P i (i=1,2) are considered:
Furthermore, the following two zero equalities with symmetric matrices P i (i=3,4) are newly introduced:
Summing the four zero equalities presented by Eqs. (24)–(27) leads to
By calculating \(\dot{V}_{4}(t)\), it can be obtained that
With the consideration of the four integral terms in Eq. (28), if the inequality (15) holds, then an upper bound of the last term in Eq. (29) can be obtained by utilizing reciprocally convex optimization approach [38]:
where \(\mathcal{S}_{1}\) is any matrix.
Thus, from (29) and (30) we have
By adding the two integral terms \(-2h_{U} \int^{t}_{t-h(t)} \int^{t}_{s} x^{T}(u) P_{3} x(u)\,du\,ds \) and \(-2h_{U} \int^{t-h(t)}_{t-h_{U}} \int^{t}_{s} x^{T}(u) P_{4} x(u)\,du\,ds \) to the results of \(\dot{V}_{5}(t)\), if the inequality (16) holds, then it can be obtained that
By utilizing Lemma 1 and reciprocally convex optimization approach [38], it can be confirmed that
where \(\mathcal{S}_{2}\) is any matrix and \(\eta(t)=h^{2}(t)/h^{2}_{U}\).
It should be noted that
From (32) to (34), the following inequality holds:
By utilizing (c) in Lemma 1, \(\dot{V}_{6}(t)\) can be estimated as
From (6), for any positive diagonal matrices \(H_{i}=\operatorname{diag}\{h_{1i},\ldots,h_{ni}\}\) (i=1,…,3), the following inequality holds:
where Θ is defined in (12).
Inspired by the work of [29], from (5), the following conditions hold:
For i=1,…,n, the above two conditions are equivalent to
Therefore, for any positive diagonal matrices \(H_{i}=\operatorname{diag}\{h_{1i},\ldots,h_{ni}\}\) (i=4,5), the following inequality holds:
From (17)–(41), by applying the S-procedure of Ref. [43], an upper bound of \(\dot {V}(t)=\sum^{6}_{i=1} \dot{V}_{i} (t)\) with the addition of (28) can be written as
By Lemma 2,
with 0=Γζ(t) is equivalent to
Then, by Lemma 3, the condition (43) is equivalent to the following inequality with any matrix \(\varPsi\in \mathbb {R}^{6n \times15n}\):
The above condition is affinely dependent on h(t). Therefore, if inequalities (13) and (14) hold, then inequality (44) is satisfied, which means that system (4) is asymptotically stable for 0≤h(t)≤h U and \(\dot{h}(t) \leq h_{D}\). This completes our proof. □
Remark 2
Unlike the previous works [11–30], the new augmented vector ζ(t) defined in Eq. (12) was utilized in Theorem 1, which includes state vectors such as \(\int^{t}_{t-h(t)} \int^{t}_{s} f(x(u)) \,du \,ds\) and \(\int^{t-h(t)}_{t-h_{U}} \int^{t}_{s} f(x(u)) \,du \,ds\). These state vectors have not previously been utilized as elements of an augmented vector. Furthermore, V 1, V 3, V 4, and V 5 have not been proposed in previous works on the stability analysis of neural networks with time-varying delays, which is the main difference between Theorem 1 and the methods in other literature. Thus, some new cross terms, which may play a role in reducing the conservatism of the stability condition, were considered in the stability criteria for (4).
Remark 3
It should be noted that the four zero equalities (24)–(27) are added to the result for \(\dot{V}(t)\) as shown in (42). Inspired by the work of [37], the two zero equalities in Eqs. (24) and (25) are utilized in Theorem 1 to enhance the feasible region of the stability criterion. As presented in Eqs. (24) and (25), the quadratic terms (h U )(x T(t)P 1 x(t)−x T(t−h(t))P 1 x(t−h(t))) and (h U )(x T(t−h(t))P 2 x(t−h(t))−x T(t−h U )P 2 x(t−h U )) play a role in enhancing the feasible region of the stability criterion. Also, as shown in (30), by merging the two integral terms \(-2(h_{U} ) \int^{t}_{t-h(t)} \dot{x}^{T}(s) P_{1} x(s)\,ds\) and \(-2(h_{U} ) \int^{t-h(t)}_{t-h_{U}} \dot{x}^{T}(s) P_{2} x(s)\,ds\) into the terms \(-(h_{U} )\int^{t}_{t-h(t)} \beta^{T}(s) \mathcal{Q}_{1} \beta(s) \,ds\) and \(-(h_{U} )\int^{t-h(t)}_{t-h_{U}} \beta^{T}(s) \mathcal{Q}_{1} \beta(s) \,ds\), respectively, the conservatism of the stability criterion can be reduced. Furthermore, the two zero equalities (26) and (27) are proposed for the first time to increase the feasible region of the criterion. These zero equalities follow from the fact that \(\int^{t}_{t-h} \int^{t}_{s} \dot{f}(u)\,du\,ds=hf(t) - \int^{t}_{t-h}f(s)\,ds\) with f(t)=x T(t)Px(t). The terms \(-(h_{U}) \int^{t}_{t-h(t)} x^{T}(s)P_{3} x(s)\,ds\) and \(-(h_{U}) \int^{t-h(t)}_{t-h_{U}}x^{T}(s)P_{4} x(s)\,ds\) are merged into the result for \(\dot{V}_{4}(t)\), and the terms \(-2(h_{U}) \int^{t}_{t-h(t)}\int^{t}_{s} \dot{x}^{T} (u) P_{3} x(u)\,du\,ds\) and \(-2(h_{U}) \int^{t-h(t)}_{t-h_{U}}\int^{t}_{s} \dot{x}^{T} (u) P_{4} x(u)\,du\,ds\) are merged into the result for \(\dot{V}_{5}(t)\) as shown in (32).
Remark 4
In Theorem 1, the positiveness of V(t) is guaranteed by the conditions \(\mathcal{R}>0\), \(\mathcal{N}>0\), Σ>0, Δ>0, \(\mathcal{G}>0\), \(\mathcal {Q}_{1}>0\), \(\mathcal{Q}_{2}>0\), and Q 3>0, which ensure the positiveness of each V i (t) (i=1,…,6). However, as mentioned in [39] and [40], by lower-bounding some of the functionals in V(t), the positiveness requirement can be relaxed, which is introduced in Theorem 2.
For the sake of simplicity of matrix and vector representation in Theorem 2, \(\tilde{e}_{i}\ (i=1,\ldots,6) \in\mathbb{R}^{6n \times n}\), which will be used are defined as block entry matrices. (For example, \(\tilde{e}_{3} = [0_{n}, 0_{n}, I_{n}, 0_{n}, 0_{n}, 0_{n} ]^{T}\).) Assume that \(\mathcal{N}>0\), Σ>0, Δ>0, \(\mathcal{G}>0\), \(\mathcal{Q}_{1}>0\), \(\mathcal{Q}_{2}>0\), and Q 3>0. Then, V(t) has the following lower bound:
By (a) in Lemma 1, the lower bound of \(\int^{t}_{t-h_{U}} \beta^{T} (s) \mathcal{N} \beta(s)\,ds\) can be obtained as
By utilizing (b) in Lemma 1, the lower bound of the integral term \(h_{U} \int^{t}_{t-h_{U}}\int^{t}_{s} \beta^{T}(u) \mathcal{Q}_{1} \beta(u)\,du\,ds\) can be
Therefore, if the following inequality holds,
then the lower bound of V(t) is guaranteed to be positive. Thus, by removing the requirement that \(\mathcal{R}\) be positive definite and adding inequality (48) to the stability conditions of Theorem 1, we obtain the following theorem.
Theorem 2
For given scalars h U >0, h D , and diagonal matrices \(K_{p}=\operatorname{diag}\{k^{+}_{1},\ldots ,k^{+}_{n}\}\) and \(K_{m}=\operatorname{diag}\{k^{-}_{1},\ldots,k^{-}_{n}\}\), system (4) is asymptotically stable for 0≤h(t)≤h U and \(\dot{h}(t) \leq h_{D} \), if there exist positive diagonal matrices \(\varSigma=\operatorname{diag}\{\sigma _{1},\ldots,\sigma_{n}\}\), \(\varDelta=\operatorname{diag}\{\delta_{1},\ldots,\delta_{n}\}\), \(H_{i}=\operatorname{diag}\{h_{1i},\ldots,h_{ni}\}\ (i=1,\ldots,5)\), symmetric matrices \(\mathcal{R} \in\mathbb{R}^{6n \times6n}\), positive definite matrices \(\mathcal{N} \in\mathbb{R}^{3n \times3n}\), \(\mathcal{G}=[G_{ij}] \in\mathbb{R}^{5n \times5n}\), \(\mathcal{Q}_{1} \in\mathbb{R}^{3n \times3n}\), \(\mathcal{Q}_{2} \in\mathbb{R}^{3n \times3n}\), \(Q_{3} \in\mathbb{R}^{n \times n}\), any matrices \(\mathcal{S}_{1}\in\mathbb{R}^{3n \times3n}\), \(\mathcal{S}_{2}\in\mathbb{R}^{3n \times3n}\), \(\varPsi\in\mathbb{R}^{6n \times15n}\), and any symmetric matrices \(P_{i} \in\mathbb{R}^{n \times n}\ (i=1,\ldots,4)\), satisfying the LMIs (13)–(16) and (48) where Φ [h(t)], \(\mathcal{P}_{i}\ (i=1,\ldots,4)\), Θ and Γ are defined as in (12), and Γ ⊥ is the right orthogonal complement of Γ.
Remark 5
When information about the upper bound of \(\dot{h}(t)\) is unavailable, Theorems 1 and 2 can still provide delay-dependent stability criteria for 0≤h(t)≤h U by discarding V 3(t).
4 Numerical examples
In this section, two numerical examples are introduced to show the improvements achieved by the proposed methods. In the examples, MATLAB with YALMIP 3.0 and SeDuMi 1.3 is used to solve the LMI problems.
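The LMIs of Theorems 1 and 2 are solved with the MATLAB toolchain above. As a minimal stand-in sketch (not the paper's code), the simplest delay-free Lyapunov LMI A^T P + PA < 0, P > 0 can even be solved in closed form via the Kronecker vectorization of the Lyapunov equation A^T P + PA = −I; the matrix A below is a hypothetical Hurwitz example.

```python
import numpy as np

# Solve A^T P + P A = -I by row-major vectorization:
# vec(A^T P + P A) = (kron(A^T, I) + kron(I, A^T)) vec(P),
# then verify the Lyapunov LMI P > 0, A^T P + P A < 0.
A = np.array([[-2.0, 0.5],
              [0.3, -1.5]])              # hypothetical Hurwitz matrix
n = A.shape[0]
I = np.eye(n)
K = np.kron(A.T, I) + np.kron(I, A.T)
P = np.linalg.solve(K, -I.flatten()).reshape(n, n)
P = (P + P.T) / 2                        # symmetrize against round-off

assert np.all(np.linalg.eigvalsh(P) > 0)                  # P > 0
assert np.all(np.linalg.eigvalsh(A.T @ P + P @ A) < 0)    # LMI holds
```

The LMIs (13)–(16) of Theorem 1 involve many coupled decision variables and require a semidefinite programming solver such as SeDuMi; the closed-form trick above only works for this single-equation special case.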
Example 1
Consider the neural networks (4) where
When h D is 0.1, 0.5, 0.9, or unknown (or larger than one), the maximum delay bounds obtained by applying Theorems 1 and 2 to system (4) with the parameters (49), together with the results of the recent works [20–23, 25, 26, 29], are listed in Table 1. It can be seen that Theorem 1 provides larger delay bounds than the methods based on the delay-partitioning approach. Also, as mentioned in Remark 4, Theorem 2 improves on the feasible region of the stability criterion of Theorem 1.
Example 2
Consider the neural networks (4) with the parameters
In Table 2, when h D is 0.8, 0.9, or unknown (or larger than one), the maximum delay bounds obtained in Refs. [20, 21, 24, 26–30] are compared with the results of the proposed methods. Table 2 confirms that Theorem 1 significantly increases the feasible region of the stability criterion and that Theorem 2 provides a larger feasible region than Theorem 1. For system (4) with the above parameters, Fig. 1 shows the state responses x(t) when h(t)=3.4504|sin(t)|, f 1(x 1(t))=0.4tanh(x 1(t)), f 2(x 2(t))=0.8tanh(x 2(t)), and x(0)=[1,−1]T. The state signal converges to zero, which verifies the effectiveness of Theorem 2.
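A simulation like the one behind Fig. 1 can be sketched with a forward-Euler integration of the delayed system (4), x'(t) = −Ax(t) + W₀f(x(t)) + W₁f(x(t−h(t))), using the delay, activation functions, and initial condition of Example 2. Note that the matrices A, W₀, W₁ below are hypothetical stand-ins chosen to be clearly stable; the paper's parameters (50) are not reproduced here.

```python
import numpy as np

# Euler simulation of x'(t) = -A x(t) + W0 f(x(t)) + W1 f(x(t - h(t)))
# with h(t) = 3.4504|sin t|, f_i scaled tanh, x(0) = [1, -1]^T.
# A, W0, W1 are illustrative placeholders, NOT the parameters (50).
A = np.diag([2.0, 2.0])
W0 = np.array([[0.5, 0.3], [-0.3, 0.4]])
W1 = np.array([[0.2, 0.1], [0.1, 0.2]])
f = lambda v: np.array([0.4 * np.tanh(v[0]), 0.8 * np.tanh(v[1])])
h = lambda t: 3.4504 * abs(np.sin(t))

dt, T = 0.001, 20.0
steps = int(T / dt)
hist = int(3.4504 / dt) + 1              # history buffer covering the max delay
x = np.zeros((hist + steps, 2))
x[:hist + 1] = [1.0, -1.0]               # constant initial function on [-h_U, 0]

for k in range(hist, hist + steps - 1):
    t = (k - hist) * dt
    kd = k - int(round(h(t) / dt))       # index of the delayed state x(t - h(t))
    x[k + 1] = x[k] + dt * (-A @ x[k] + W0 @ f(x[k]) + W1 @ f(x[kd]))

# for a stable parameter set the trajectory decays toward the origin
assert np.linalg.norm(x[-1]) < 1e-2
```

With the dominant self-feedback −2x_i and small coupling weights, the trajectory decays regardless of the delay, mirroring the qualitative behavior reported for Fig. 1.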
5 Conclusion
In this paper, two improved delay-dependent stability criteria for neural networks with time-varying delays have been proposed by use of the Lyapunov stability theorem and the LMI framework. In Theorem 1, by constructing a newly augmented Lyapunov–Krasovskii functional and utilizing the new zero equalities and techniques discussed in Remarks 3 and 4, a sufficient condition for asymptotic stability of the system was derived. By taking a lower bound of the Lyapunov–Krasovskii functional and utilizing its positiveness, a further improved stability condition was derived in Theorem 2. Through two numerical examples treated in many previous works, the improvement in the feasible region of the two proposed stability criteria has been verified. Based on the proposed Lyapunov–Krasovskii functional, future work will focus on state estimation [44], periodic solutions of neural networks [45], quasi-synchronization control for switched networks [46], dissipativity and quasi-synchronization with discontinuous activations and parameter mismatches [47], and so on.
References
Xu, S., Lam, J., Ho, D.W.C.: Novel global robust stability criteria for interval neural networks with multiple time-varying delays. Phys. Lett. A 342, 322–330 (2005)
Chua, L.O., Wang, L.: Cellular neural networks: applications. IEEE Trans. Circuits Syst. 35, 1273–1290 (1988)
Joya, G., Atencia, M.A., Sandoval, F.: Hopfield neural networks for optimization: study of the different dynamics. Neurocomputing 43, 219–237 (2002)
Liu, Y., Wang, Z., Liu, X.: Global exponential stability of generalized recurrent neural networks with discrete and distributed delays. Neural Netw. 19, 667–675 (2006)
Ozcan, N., Arik, S.: A new sufficient condition for global robust stability of bidirectional associative memory neural networks with multiple time delays. Nonlinear Anal., Real World Appl. 10, 3312–3320 (2009)
Faydasicok, O., Arik, S.: Equilibrium and stability analysis of delayed neural networks under parameter uncertainties. Appl. Math. Comput. 218, 6716–6726 (2012)
Faydasicok, O., Arik, S.: Further analysis of global robust stability of neural networks with multiple time delays. J. Franklin Inst. 349, 813–825 (2012)
Faydasicok, O., Arik, S.: An analysis of stability of uncertain neural networks with multiple time delays. J. Franklin Inst. 350, 1808–1826 (2013)
Xu, S., Lam, J., Ho, D.W.C., Zou, Y.: Delay-dependent exponential stability for a class of neural networks with time delays. J. Comput. Appl. Math. 183, 16–28 (2005)
Xu, S., Lam, J., Ho, D.W.C., Zou, Y.: Novel global asymptotic stability criteria for delayed cellular neural networks. IEEE Trans. Circuits Syst. II 52, 349–353 (2005)
Zhang, B., Xu, S., Zou, Y.: Improved delay-dependent exponential stability criteria for discrete-time recurrent neural networks with time-varying delays. Neurocomputing 72, 321–330 (2008)
Kwon, O.M., Park, J.H., Lee, S.M.: On robust stability for uncertain neural networks with interval time-varying delays. IET Control Theory Appl. 2, 625–634 (2008)
Syed Ali, M.: Novel delay-dependent stability analysis of Takagi–Sugeno fuzzy uncertain neural networks with time varying delays. Chin. Phys. B 21, 070207 (2012)
Syed Ali, M.: Robust stability analysis of Takagi–Sugeno uncertain stochastic fuzzy recurrent neural networks with mixed time-varying delays. Chin. Phys. B 20, 080201 (2011)
Syed Ali, M.: Robust stability of stochastic uncertain recurrent neural networks with Markovian jumping parameters and time-varying delays. Int. J. Mach. Learn. Cybern. (2012). doi:10.1007/s13042-012-0124-6
Balasubramaniam, P., Lakshmanan, S.: Delay-range dependent stability criteria for neural networks with Markovian jumping parameters. Nonlinear Anal. Hybrid Syst. 3, 749–756 (2009)
Balasubramaniam, P., Lakshmanan, S., Rakkiyappan, R.: Delay-interval dependent robust stability criteria for stochastic neural networks with linear fractional uncertainties. Neurocomputing 72, 3675–3682 (2009)
Li, T., Guo, L., Wu, L., Sun, C.: Delay-dependent robust stability criteria for delay neural networks with linear fractional uncertainties. Int. J. Control. Autom. Syst. 7, 281–287 (2009)
Li, T., Zheng, W.X., Lin, C.: Delay-slope-dependent stability results of recurrent neural networks. IEEE Trans. Neural Netw. 22, 2138–2143 (2011)
Zhu, X.L., Yang, G.H.: New delay-dependent stability results for neural networks with time-varying delay. IEEE Trans. Neural Netw. 19, 1783–1791 (2008)
Zhang, Y., Yue, D., Tian, E.: New stability criteria of neural networks with interval time-varying delays: a piecewise delay method. Appl. Math. Comput. 208, 249–259 (2009)
Xiao, S.-P., Zhang, X.-M.: New globally asymptotic stability criteria for delayed neural networks. IEEE Trans. Circuits Syst. II 56, 659–663 (2009)
Zhang, H., Liu, Z., Huang, G.-B., Wang, Z.: Novel weighting-delay-based stability criteria for recurrent neural networks with time-varying delay. IEEE Trans. Neural Netw. 21, 91–106 (2010)
Li, T., Song, A., Fei, S., Wang, T.: Delay-derivative-dependent stability for delayed neural networks with unbounded distributed delay. IEEE Trans. Neural Netw. 21, 1365–1371 (2010)
Tian, J., Xie, X.: New asymptotic stability criteria for neural networks with time-varying delay. Phys. Lett. A 374, 938–943 (2010)
Kwon, O.M., Lee, S.M., Park, J.H.: Improved results on stability analysis of neural networks with time-varying delays: novel delay-dependent criteria. Mod. Phys. Lett. B 24, 775–789 (2010)
Tian, J., Zhong, S.: Improved delay-dependent stability criterion for neural networks with time-varying delay. Appl. Math. Comput. 217, 10278–10288 (2011)
Wang, Y., Yang, C., Zuo, Z.: On exponential stability analysis for neural networks with time-varying delays and general activation functions. Commun. Nonlinear Sci. Numer. Simul. 17, 1447–1459 (2012)
Kwon, O.M., Park, J.H., Lee, S.M., Cha, E.J.: Analysis on delay-dependent stability for neural networks with time-varying delays. Neurocomputing 103, 114–120 (2013)
Zhang, H., Yang, F., Zhang, Q.: Stability analysis for neural networks with time-varying delay based on quadratic convex combination. IEEE Trans. Neural Netw. Learn. Syst. (2013). doi:10.1109/TNNLS.2012.2236571
Gu, K.: A further refinement of discretized Lyapunov functional method for the stability of time-delay systems. Int. J. Control 74, 967–976 (2001)
Yue, D., Won, S., Kwon, O.: Delay dependent stability of neutral systems with time delay: an LMI approach. IEE Proc., Control Theory Appl. 150, 23–27 (2003)
Kwon, O.M., Park, J.H.: On improved delay-dependent robust control for uncertain time-delay systems. IEEE Trans. Autom. Control 49, 1991–1995 (2004)
He, Y., Liu, G.P., Rees, D., Wu, M.: Stability analysis for neural networks with time-varying interval delay. IEEE Trans. Neural Netw. 18, 1850–1854 (2007)
Ariba, Y., Gouaisbaut, F.: An augmented model for robust stability analysis of time-varying delay systems. Int. J. Control 82, 1616–1626 (2009)
Sun, J., Liu, G.P., Chen, J., Rees, D.: Improved delay-range- dependent stability criteria for linear systems with time-varying delays. Automatica 46, 466–470 (2010)
Kim, S.H., Park, P., Jeong, C.: Robust H ∞ stabilization of networked control systems with packet analyzer. IET Control Theory Appl. 4, 1828–1837 (2010)
Park, P.G., Ko, J.W., Jeong, C.: Reciprocally convex approach to stability of systems with time-varying delays. Automatica 47, 235–238 (2011)
Wang, T., Zhang, C., Fei, S., Li, T.: Further stability criteria on discrete-time delayed neural networks with distributed delay. Neurocomputing 111, 195–203 (2013)
Xu, S., Lam, J., Zhang, B., Zou, Y.: A new result on the delay-dependent stability of discrete systems with time-varying delays. Int. J. Robust Nonlinear Control (2013). doi:10.1002/rnc.3006
Qian, W., Cong, S., Li, T., Fei, S.: Improved stability conditions for systems with interval time-varying delay. Int. J. Control. Autom. Syst. 10, 1146–1152 (2012)
Skelton, R.E., Iwasaki, T., Grigoradis, K.M.: A Unified Algebraic Approach to Linear Control Design. Taylor & Francis, New York (1997)
Boyd, S., El Ghaoui, L., Feron, E., Balakrishnan, V.: Linear Matrix Inequalities in System and Control Theory. SIAM, Philadelphia (1994)
Liu, X., Cao, J.: Robust state estimation for neural networks with discontinuous activations. IEEE Trans. Syst. Man Cybern., Part B 24, 1013–1021 (2011)
Liu, X., Cao, J., Yu, W.: Filippov systems and quasi-synchronization control for switched networks. Chaos 22, 033110 (2012)
Liu, X., Cao, J.: On periodic solutions of neural networks via differential inclusions. Neural Netw. 22, 329–334 (2009)
Liu, X., Chen, T., Lu, W.: Dissipativity and quasi-synchronization for neural networks with discontinuous activations and parameter mismatches. Neural Netw. 24, 1013–1021 (2011)
Acknowledgements
This research was supported by Basic Science Research Program through the National Research Foundation of Korea (NRF) funded by the Ministry of Education, Science and Technology (2008-0062611), and by a Grant of the Korea Healthcare Technology R & D Project, Ministry of Health & Welfare, Republic of Korea (A100054).
Kwon, O.M., Park, J.H., Lee, S.M. et al. New augmented Lyapunov–Krasovskii functional approach to stability analysis of neural networks with time-varying delays. Nonlinear Dyn 76, 221–236 (2014). https://doi.org/10.1007/s11071-013-1122-2