Abstract
In previous work, a typical recurrent neural network termed Zhang neural network (ZNN) has been developed for solving various time-varying problems. Building on that work, by exploiting a special activation function (i.e., the Li activation function), the resultant ZNN model is presented and investigated in this paper for online solution of the time-varying linear matrix inequality (TVLMI). For such a Li-function activated ZNN (LFAZNN) model, theoretical results are provided to show its excellent computational performance on solving the TVLMI; that is, the presented LFAZNN model has the property of finite-time convergence. Comparative simulation results with two illustrative examples further substantiate the efficacy of the presented LFAZNN model for TVLMI solving.
1 Introduction
Recently, the linear matrix inequality (LMI) has played an increasingly important role in science and engineering [1,2,3,4,5,6]. LMI-based approaches have been designed and used in many fields, such as stability analysis of fuzzy control systems, robustness analysis of neural networks, and gain-scheduled controller design. The LMI is now considered a powerful formulation and design technique for solving different types of problems [1,2,3,4,5,6,7,8,9,10,11,12,13]. For example, the Lyapunov matrix inequality is constructed to analyze the stability and performance of delayed dynamical systems [6,7,8,9]. As a powerful tool, the LMI-based technique is also applicable to system identification problems, nonconvex multi-criterion optimal control problems, and inverse problems of optimal control [9, 13]. Generally speaking, these problems are transformed into related LMI problems, which are then solved by using neural-network models and/or numerical algorithms [7,8,9, 13]. Given its importance, an effective model/algorithm for solving a class of LMIs is worth developing and investigating.
To solve a class of LMIs (e.g., Lyapunov matrix inequalities and algebraic Riccati matrix inequalities), gradient-based neural networks (which can be viewed as the common approach) were developed by Lin et al. in [7] and [8]. In [9], a simplified neural network for solving a class of LMI problems was presented by Cheng et al. Besides, dynamic system methods were provided by Hu [14] for solving mixed LMIs and linear vector inequalities and equalities. Note that the intrinsic design of these neural networks and methods is for solving time-invariant LMIs. As many systems in engineering applications are time-varying, the corresponding LMIs may also be time-varying (i.e., the coefficients change as time t evolves). When the aforementioned neural networks and methods are exploited directly for solving time-varying LMIs, the time-derivative information of the coefficients is not involved. They may thus be less effective and efficient due to the lack of consideration of such important information [13, 15,16,17].
Since March 2001, a typical recurrent neural network termed Zhang neural network (ZNN) has been developed by Zhang et al. for solving various time-varying problems [16, 18]. The general procedure of the ZNN design is presented in the Appendix. For better understanding, the concept of ZNN [13] is given as follows.
Concept Being a typical recurrent neural network, the Zhang neural network (ZNN) has been developed and studied since 2001. It originates from the research on Hopfield-type neural networks and is a systematic approach for solving time-varying problems. Such a ZNN differs from the traditional gradient-based neural network(s) in terms of the problem to be solved, indefinite error function, exponent-type design formula, dynamic equation, and the utilization of time-derivative information.
By following the inspirational work of Zhang et al. [16, 18], two different ZNN models for time-varying LMI (TVLMI) solving were proposed and investigated in the authors’ previous work [13, 17]. Specifically, in [17], the first ZNN model was developed by exploiting a variant of the ZNN design formula. In [13], the second ZNN model was developed by means of a conversion from inequality to equality. Of these two ZNN models, the former is depicted in an implicit dynamics, while the latter is depicted in an explicit dynamics. Theoretical and simulative results were also provided in [17] and [13] to further show the efficacy of the presented two ZNN models for TVLMI solving. Following the research of [13] and [17], another ZNN model was developed by Sun et al. in [19] for solving the time-varying Stein matrix inequality (which, to some extent, shows the significance of the ZNN research community). All of the results in [13, 17] and [19] also indicate that different choices of activation functions result in different convergence performances of a ZNN model. That is, the activation function greatly affects the ZNN convergence.
In [20], a special activation function termed Li activation function was designed by Li et al. to accelerate the ZNN model to finite-time convergence. The recent research on ZNN [21,22,23,24] has further revealed that the finite-time convergence of ZNN models can be achieved by exploiting a specifically-designed activation function. Motivated by the idea in [20], this paper presents a ZNN model with finite-time convergence on TVLMI solving by employing the Li activation function. Evidently, the convergence property of such a model is different from that of the existing ZNN model [13, 17, 19] using the previously-investigated activation functions (e.g., power-sigmoid and hyperbolic-sine activation functions). It is also worth pointing out here that at present the research on TVLMI solving is rare (though it can be an appealing topic), not to mention a computational model with finite-time convergence.
Being an extension of the successful work [13, 17, 19], this paper presents and investigates the Li-function activated ZNN (LFAZNN) model for online solution of TVLMI. Specifically, by defining an indefinite matrix-valued error function and by using the ZNN design formula, the ZNN model depicted in an explicit dynamics is developed for TVLMI solving. For such a ZNN model, employing Li activation function [20] results in the LFAZNN model. Then, it is theoretically proven that the presented LFAZNN model has the property of finite-time convergence. Simulation results with two illustrative examples further substantiate the efficacy of the presented LFAZNN model for TVLMI solving.
The rest of this paper is organized into four sections. Section 2 presents the formulations of the TVLMI problem and the ZNN model. In Sect. 3, the LFAZNN model is presented and investigated for TVLMI solving, together with theoretical results showing the computational performance of such an LFAZNN model. In Sect. 4, as synthesized by the presented LFAZNN model, simulation results are illustrated. Section 5 concludes this paper with final remarks. Before ending this section, it is worth summarizing and listing the main contributions of this paper as follows.
-
This paper presents the Li-function activated ZNN (LFAZNN) model for online solution of time-varying linear matrix inequality (TVLMI). Note that Li activation function is different from the activation functions investigated in [13, 17, 19].
-
This paper gives theoretical results showing the finite-time convergence of the presented LFAZNN model. To the best of the authors' knowledge, this is the first time that a finite-time exact solution to the TVLMI is provided.
-
This paper illustrates comparative simulation results to further substantiate the efficacy of the presented LFAZNN model for TVLMI solving.
2 Time-Varying Linear Matrix Inequality
In this section, the formulation of the TVLMI problem is presented. Then, by defining an indefinite matrix-valued error function and by using the ZNN design formula [13, 15,16,17,18,19], the resultant ZNN model is developed for TVLMI solving.
Specifically, in this paper, the following TVLMI problem is considered:

$$A(t)X(t)\leqslant B(t),\quad t\in [0,+\infty ), \qquad (1)$$
where \(A(t)\in R^{m\times m}\) and \(B(t)\in R^{m\times n}\) are smoothly time-varying matrices, and \(X(t)\in R^{m\times n}\) is the unknown matrix to be obtained. The objective of this paper is to find a feasible solution X(t) such that (1) holds true for any time instant \(t\geqslant 0\). Note that, in (1), the symbol “\(\leqslant \)” (between two matrices) means that every element of the matrix on the left is less than or equal to the corresponding element of the matrix on the right. Besides, to lay a basis for further discussion, A(t) is assumed to be nonsingular at any time instant \(t\in [0,+\infty )\) in this paper.
To solve the TVLMI (1), the indefinite matrix-valued error function is defined as follows:

$$E(t)=A(t)X(t)-B(t)+\varLambda ^{2}(t)\in R^{m\times n}, \qquad (2)$$
where \(\varLambda ^{2}(t)=\varLambda (t)\odot \varLambda (t)\) with the time-varying matrix \(\varLambda (t)\) being
Note that the multiplication operator \(\odot \) is the Hadamard product [25]. Thus, the time-varying matrix \(\varLambda ^{2}(t)\) in (2) is written as below:
of which each element is greater than or equal to zero, i.e., \(\varLambda ^{2}(t)\geqslant 0\). Evidently, when \(E(t)=0\), the following result is obtained:

$$A(t)X(t)-B(t)=-\varLambda ^{2}(t)\leqslant 0.$$
That is to say, solving the TVLMI (1) is equivalent to solving the time-varying matrix equation \(A(t)X(t)-B(t)+\varLambda ^{2}(t)=0\).
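This equivalence can be checked numerically. The sketch below is a toy static instance with randomly generated matrices (the values are illustrative assumptions, not from the paper): it solves \(AX=B-\varLambda \odot \varLambda \) for a fixed slack matrix \(\varLambda \) and verifies that the resulting X satisfies \(AX\leqslant B\) elementwise.

```python
import numpy as np

# Toy static check of the equivalence: if A X - B + Λ⊙Λ = 0,
# then A X = B - Λ⊙Λ <= B elementwise (matrices are illustrative).
rng = np.random.default_rng(0)
m, n = 3, 2
A = rng.standard_normal((m, m)) + 3 * np.eye(m)   # keep A nonsingular
B = rng.standard_normal((m, n))
Lam = rng.standard_normal((m, n))                  # arbitrary slack matrix

X = np.linalg.solve(A, B - Lam * Lam)              # Λ⊙Λ is the Hadamard square
residual = A @ X - B + Lam * Lam
assert np.allclose(residual, 0)                    # matrix equation holds
assert np.all(A @ X <= B + 1e-12)                  # hence the LMI A X <= B holds
```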
Besides, for further discussion, the relationship between matrices \(\varLambda (t)\) and \(\varLambda ^{2}(t)\) is reformulated as

$$\text {vec}(\varLambda ^{2}(t))=D(t)\text {vec}(\varLambda (t)),$$
where operator \(\text {vec}(\cdot )\) generates the column vector in \(R^{mn}\) obtained by stacking all column vectors of a matrix [13, 15, 16]. In addition, the diagonal matrix \(D(t)\in R^{mn\times mn}\) is defined as follows:

$$D(t)=\text {diag}(D_{1}(t),D_{2}(t),\ldots ,D_{n}(t)),$$

with the ith (with \(i=1,\ldots ,n\)) block matrix being

$$D_{i}(t)=\text {diag}(\lambda _{1i}(t),\lambda _{2i}(t),\ldots ,\lambda _{mi}(t))\in R^{m\times m},$$

where \(\lambda _{ji}(t)\) denotes the jith element of \(\varLambda (t)\).
By defining \(v(t)=\text {vec}(\varLambda (t))\in R^{mn}\), the time derivative of \(\text {vec}(\varLambda ^{2}(t))\) is formulated as

$$\frac{\text {d}\,\text {vec}(\varLambda ^{2}(t))}{\text {d}t}=2D(t){\dot{v}}(t),$$
with \({\dot{v}}(t)\) being the time derivative of v(t). Furthermore, by defining \(u(t)=\text {vec}(X(t))\in R^{mn}\) and \(w(t)=\text {vec}(B(t))\in R^{mn}\), the error function (2) is reformulated as follows:

$${\tilde{E}}(t)=M(t)u(t)-w(t)+D(t)v(t)\in R^{mn}, \qquad (3)$$
where \(M(t)=I\otimes A(t)\in R^{mn\times mn}\) with \(I\in R^{n\times n}\) being the identity matrix and \(\otimes \) denoting the Kronecker product [13, 15, 16].
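The vectorization used here follows the standard identity \(\text {vec}(AX)=(I\otimes A)\text {vec}(X)\) with column stacking. A small numerical sketch (sizes and matrices are illustrative assumptions):

```python
import numpy as np

# vec(A X) = (I ⊗ A) vec(X) with column-stacking vectorization,
# matching M(t) = I ⊗ A(t) in the text (small random instance).
rng = np.random.default_rng(1)
m, n = 3, 2
A = rng.standard_normal((m, m))
X = rng.standard_normal((m, n))

vec = lambda Z: Z.flatten(order="F")      # stack columns, as vec(.) does
M = np.kron(np.eye(n), A)                 # I in R^{n x n}, so M is mn x mn
assert np.allclose(M @ vec(X), vec(A @ X))
```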
To make the error function \({\tilde{E}}(t)\) converge to zero, the time derivative of \({\tilde{E}}(t)\), i.e., \(\dot{{\tilde{E}}}(t)\), is chosen via the following form (i.e., the so-called ZNN design formula) [13, 15,16,17,18,19]:

$$\dot{{\tilde{E}}}(t)=-\gamma {\mathcal {F}}({\tilde{E}}(t)), \qquad (4)$$
where design parameter \(\gamma >0\in R\), being the reciprocal of a capacitance parameter [26], is used to scale the convergence rate of the solution, and \({\mathcal {F}}(\cdot )\) denotes the activation-function array. Note that, in general, the value of \(\gamma \) should be set as large as the hardware system would permit, or selected appropriately for simulation purposes [26]. In addition, function \(f(\cdot )\), being a processing element of \({\mathcal {F}}(\cdot )\), can be any monotonically increasing odd activation function, e.g., the linear, power-sigmoid, power-sum and hyperbolic-sine activation functions [13, 15,16,17,18,19].
Based on (3) and (4), the following dynamic equation of a ZNN model is established:

$$M(t){\dot{u}}(t)+2D(t){\dot{v}}(t)=-{\dot{M}}(t)u(t)+{\dot{w}}(t)-\gamma {\mathcal {F}}\left( M(t)u(t)-w(t)+D(t)v(t)\right) , \qquad (5)$$
where \({\dot{u}}(t)\), \({\dot{M}}(t)\) and \({\dot{w}}(t)\) are the time derivatives of u(t), M(t) and w(t), respectively. As for (5), it is reformulated as

$$\left[ M(t),\,2D(t)\right] \begin{bmatrix}{\dot{u}}(t)\\ {\dot{v}}(t)\end{bmatrix}=-\left[ {\dot{M}}(t),\,0\right] \begin{bmatrix}u(t)\\ v(t)\end{bmatrix}+{\dot{w}}(t)-\gamma {\mathcal {F}}\left( \left[ M(t),\,D(t)\right] \begin{bmatrix}u(t)\\ v(t)\end{bmatrix}-w(t)\right) . \qquad (6)$$
Thus, by defining the augmented vector \(y(t)=[u^{\text {T}}(t),v^{\text {T}}(t)]^{\text {T}}\in R^{2mn}\), (6) is further rewritten as follows:

$$W(t){\dot{y}}(t)=Q(t)y(t)+{\dot{w}}(t)-\gamma {\mathcal {F}}\left( P(t)y(t)-w(t)\right) , \qquad (7)$$
where \({\dot{y}}(t)\) is the time derivative of y(t). In addition, the augmented matrices \(W(t)\in R^{mn\times 2mn}\), \(Q(t)\in R^{mn\times 2mn}\) and \(P(t)\in R^{mn\times 2mn}\) are defined respectively as below:

$$W(t)=\left[ M(t),\,2D(t)\right] ,\quad Q(t)=\left[ -{\dot{M}}(t),\,0\right] ,\quad P(t)=\left[ M(t),\,D(t)\right] .$$
To make (7) more computable, we can reformulate (7) as the following explicit form:

$${\dot{y}}(t)=W^{\dag }(t)\left( Q(t)y(t)+{\dot{w}}(t)-\gamma {\mathcal {F}}\left( P(t)y(t)-w(t)\right) \right) , \qquad (8)$$
where \(W^{\dag }(t)=W^{\text {T}}(t)(W(t)W^{\text {T}}(t))^{-1}\in R^{2mn\times mn}\). Therefore, based on the error function (2), the ZNN model (8) is obtained for TVLMI solving. For (8), the following theoretical result is provided, of which the proof can be generalized from [13, 17,18,19].
Theorem 1
Given a smoothly time-varying nonsingular coefficient matrix \(A(t)\in R^{m\times m}\) and a smoothly time-varying coefficient matrix \(B(t)\in R^{m\times n}\) in (1), if a monotonically increasing odd activation function array \({\mathcal {F}}(\cdot )\) is used, then ZNN model (8) generates an exact time-varying solution of the TVLMI (1).
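As a side note on (8), the right pseudoinverse \(W^{\dag }=W^{\text {T}}(WW^{\text {T}})^{-1}\) exists whenever W(t) has full row rank. A quick numerical sketch, with a random full-row-rank matrix standing in for W(t) (an illustrative assumption):

```python
import numpy as np

# For a full-row-rank W in R^{mn x 2mn}, the right pseudoinverse used in (8),
# W† = Wᵀ(W Wᵀ)⁻¹, satisfies W W† = I and agrees with np.linalg.pinv.
rng = np.random.default_rng(2)
mn = 4
W = rng.standard_normal((mn, 2 * mn))              # full row rank (generic case)
W_dag = W.T @ np.linalg.inv(W @ W.T)
assert np.allclose(W @ W_dag, np.eye(mn))
assert np.allclose(W_dag, np.linalg.pinv(W))
```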
3 Li-Function Activated ZNN Model
In this section, by exploiting Li activation function in [20], the Li-function activated ZNN (LFAZNN) model is developed and investigated for TVLMI solving.
Specifically, the Li activation function originating from [20] is presented as follows:

$$\psi ({\tilde{e}}_{i})=\frac{1}{2}\text {Lip}^{\tau }({\tilde{e}}_{i})+\frac{1}{2}\text {Lip}^{1/\tau }({\tilde{e}}_{i}),$$
where \({\tilde{e}}_{i}\) is the ith element of \({\tilde{E}}(t)\) depicted in (3) and \(\tau \in (0,1)\). In addition, \(\text {Lip}^{\tau }(\cdot )\) is defined as

$$\text {Lip}^{\tau }(x)=|x|^{\tau }\text {sgn}(x)={\left\{ \begin{array}{ll} |x|^{\tau },&{}\text {if } x>0,\\ 0,&{}\text {if } x=0,\\ -|x|^{\tau },&{}\text {if } x<0, \end{array}\right. }$$

with symbol \(|\cdot |\) denoting the absolute value of a scalar.
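A minimal elementwise sketch of such an activation, assuming the sign-bi-power form of [20] (i.e., the average of \(\text {Lip}^{\tau }\) and \(\text {Lip}^{1/\tau }\); function names below are illustrative):

```python
import numpy as np

def lip(x, tau):
    """Lip^τ(x) = |x|^τ sgn(x), applied elementwise."""
    return np.abs(x) ** tau * np.sign(x)

def li_activation(x, tau=0.2):
    # Sign-bi-power combination per Li et al. [20] (assumed form).
    return 0.5 * (lip(x, tau) + lip(x, 1.0 / tau))

x = np.array([-2.0, -0.5, 0.0, 0.5, 2.0])
y = li_activation(x)
assert np.all(np.sign(y) == np.sign(x))           # monotonically increasing, odd
assert np.allclose(li_activation(-x), -y)         # odd symmetry
```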
Thus, by exploiting the Li activation function array \(\varPsi (\cdot )\) [with \(\psi (\cdot )\) being its element], the following LFAZNN model for TVLMI solving is established:

$${\dot{y}}(t)=W^{\dag }(t)\left( Q(t)y(t)+{\dot{w}}(t)-\gamma \varPsi \left( P(t)y(t)-w(t)\right) \right) . \qquad (9)$$
As for such an LFAZNN model (9), the theoretical result on the general convergence property is presented as follows, which can be generalized from Theorem 1.
Corollary
Given a smoothly time-varying nonsingular coefficient matrix \(A(t)\in R^{m\times m}\) and a smoothly time-varying coefficient matrix \(B(t)\in R^{m\times n}\) in (1), the presented LFAZNN model (9) generates an exact time-varying solution of the TVLMI (1).
As to the presented LFAZNN model (9), its neural-network structure is given in Fig. 1. In such a figure, the ith (with \(i=1,2,\ldots ,2mn\)) neuron of (9) is written as follows:

$$y_{i}=\int \sum _{k=1}^{\mu }\hat{w}_{ik}\left( \sum _{j=1}^{\nu }q_{kj}y_{j}+{\dot{w}}_{k}-\gamma \psi \left( \sum _{j=1}^{\nu }p_{kj}y_{j}-w_{k}\right) \right) \text {d}t,$$
where \(\nu =2\mu =2mn\). \(\hat{w}_{ik}\), \(q_{kj}\) and \(p_{kj}\) denote time-varying weights that correspond to the ikth element of \(W^{\dag }(t)\), the kjth element of Q(t) and the kjth element of P(t), respectively. \({\dot{w}}_{k}\) and \(w_{k}\) denote time-varying thresholds that correspond to the kth elements of \({\dot{w}}(t)\) and w(t), respectively. As shown in Fig. 1, there are three layers in the structure of the presented LFAZNN model (9). The kth output of the first layer is \(\sum _{j=1}^{\nu }q_{kj}y_{j}+{\dot{w}}_{k}-\gamma \psi (\sum _{j=1}^{\nu }p_{kj}y_{j}-w_{k})\), whereas the ith output of the second layer is \(\sum _{k=1}^{\mu }\hat{w}_{ik}(\sum _{j=1}^{\nu }q_{kj}y_{j}+{\dot{w}}_{k}-\gamma \psi (\sum _{j=1}^{\nu }p_{kj}y_{j}-w_{k}))\), i.e., \({\dot{y}}_{i}\) (the time derivative of \(y_{i}\)). The third layer possesses the integral action, and thus the ith output of such a layer is \({y}_{i}\). Given an initial state, the state of \(\{y_{1},y_{2},\ldots ,y_{\nu }\}\) in Fig. 1 evolves as time elapses, and finally reaches an equilibrium state. At this time, the output of the third layer is the solution of the TVLMI (1), showing that the presented LFAZNN model (9) can generate a time-varying solution of (1).
Besides, as for the presented LFAZNN model (9), it has the property of finite-time convergence for TVLMI solving.
Theorem 2
Given a smoothly time-varying nonsingular coefficient matrix \(A(t)\in R^{m\times m}\) and a smoothly time-varying coefficient matrix \(B(t)\in R^{m\times n}\) in (1), the error function E(t) of the presented LFAZNN model (9) can converge to zero within finite time, which corresponds to the finite-time convergence of X(t) [whose vectorization form is the first part of the state vector y(t) of (9)] to a time-varying theoretical solution of TVLMI (1).
Proof
As for the presented LFAZNN model (9), it can be reformulated as follows:

$$\dot{{\tilde{E}}}(t)=-\gamma \varPsi ({\tilde{E}}(t)),$$
where \({\tilde{E}}(t)\) is depicted in (3). By defining \({\tilde{e}}_{i}(t)\) as the ith element of \({\tilde{E}}(t)\) (with \(i=1,2,\ldots ,mn\)), we have the following decoupled differential equation:

$$\dot{{\tilde{e}}}_{i}(t)=-\gamma \psi ({\tilde{e}}_{i}(t)),$$
where \(\dot{{\tilde{e}}}_{i}(t)\) denotes the time derivative of \({\tilde{e}}_{i}(t)\).
For further discussion, \({\tilde{E}}(0)=M(0)u(0)-w(0)+D(0)v(0)\) is denoted as the initial value of \({\tilde{E}}(t)\) in (3). In addition, \({\tilde{e}}^+(t)\) is denoted as the element in \({\tilde{E}}(t)\) with the largest initial value \({\tilde{e}}^+(0)=\max \{{\tilde{E}}(0)\}\), and \({\tilde{e}}^-(t)\) is denoted as the element in \({\tilde{E}}(t)\) with the smallest initial value \({\tilde{e}}^-(0)=\min \{{\tilde{E}}(0)\}\).
Based on the comparison lemma [20], since \({\tilde{e}}^+(0)=\max \{{\tilde{E}}(0)\}\geqslant {\tilde{e}}_{i}(0)\) with \(i=1,2,\ldots ,mn\), we have \({\tilde{e}}^+(t)\geqslant {\tilde{e}}_{i}(t)\) with \(t\geqslant 0\). Similarly, we have \({\tilde{e}}^-(t)\leqslant {\tilde{e}}_{i}(t)\) with \(t\geqslant 0\). Thus, for all possible i, the following result is obtained, as time t goes on:

$${\tilde{e}}^-(t)\leqslant {\tilde{e}}_{i}(t)\leqslant {\tilde{e}}^+(t),$$
which means that \({\tilde{e}}_{i}(t)\) converges to zero when both \({\tilde{e}}^+(t)\) and \({\tilde{e}}^-(t)\) reach zero. That is to say, for the presented LFAZNN model (9), its convergence time \(t_\text {c}\) is bounded by the larger of the convergence times of \({\tilde{e}}^+(t)\) and \({\tilde{e}}^-(t)\). In mathematics, \(t_\text {c}\leqslant \max \{t_\text {c}^+,t_\text {c}^-\}\), where \(t_\text {c}^+\) and \(t_\text {c}^-\) denote the convergence times corresponding to \({\tilde{e}}^+(t)\) and \({\tilde{e}}^-(t)\), respectively.
Evidently, \(t_\text {c}^+\) and \(t_\text {c}^-\) should be calculated first so as to obtain \(t_\text {c}\). To calculate \(t_\text {c}^+\), we have

$$\dot{{\tilde{e}}}^+(t)=-\gamma \psi ({\tilde{e}}^+(t)),\quad \text {with } {\tilde{e}}^+(0)=\max \{{\tilde{E}}(0)\}.$$
Defining a Lyapunov function candidate \(L(t)=|{\tilde{e}}^+(t)|^2\), we have its time derivative as

$${\dot{L}}(t)=2{\tilde{e}}^+(t)\dot{{\tilde{e}}}^+(t)=-2\gamma {\tilde{e}}^+(t)\psi ({\tilde{e}}^+(t))\leqslant -\gamma |{\tilde{e}}^+(t)|^{\tau +1}=-\gamma L^{\frac{\tau +1}{2}}(t).$$
By solving \({\dot{L}}(t)\leqslant -\gamma L^{\frac{\tau +1}{2}}(t)\) with the initial condition \(L(0)=|{\tilde{e}}^+(0)|^2\), the following result is obtained:

$$L^{\frac{1-\tau }{2}}(t)\leqslant |{\tilde{e}}^+(0)|^{1-\tau }-\frac{\gamma (1-\tau )}{2}t,\quad \text {for } 0\leqslant t\leqslant \frac{2|{\tilde{e}}^+(0)|^{1-\tau }}{\gamma (1-\tau )}.$$
This implies that L(t) converges to zero at most after a time period \({2|{\tilde{e}}^+(0)|^{1-\tau }}/{\gamma (1-\tau )}\). Therefore, \({\tilde{e}}^+(t)\) reaches zero for \(t>{2|{\tilde{e}}^+(0)|^{1-\tau }}/{\gamma (1-\tau )}\), i.e., the convergence time for \({\tilde{e}}^+(t)\) is \(t_\text {c}^+<{2|{\tilde{e}}^+(0)|^{1-\tau }}/{\gamma (1-\tau )}\). By following the similar analysis, we have \(t_\text {c}^-<{2|{\tilde{e}}^-(0)|^{1-\tau }}/{\gamma (1-\tau )}\). Thus, the convergence time for \({\tilde{E}}(t)\) is obtained as follows:

$$t_\text {c}\leqslant \max \{t_\text {c}^+,t_\text {c}^-\}<\frac{2\max \{|{\tilde{e}}^+(0)|^{1-\tau },\,|{\tilde{e}}^-(0)|^{1-\tau }\}}{\gamma (1-\tau )}.$$
Note that \({\tilde{E}}(t)\) is the vectorization form of the error function E(t) depicted in (2). By summarizing the above analysis, the error function E(t) of the presented LFAZNN model (9) can converge to zero within finite time \(t_\text {c}\), which corresponds to the finite-time convergence of X(t) [whose vectorization form is the first part of the state vector y(t) of (9)] to a time-varying theoretical solution of TVLMI (1). The proof is thus completed. \(\square \)
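The bound in the proof can be checked numerically on the decoupled scalar dynamics \(\dot{{\tilde{e}}}=-\gamma \psi ({\tilde{e}})\). The sketch below (assuming the sign-bi-power form of the Li function, with illustrative parameter values) integrates by forward Euler and confirms convergence within the proven bound:

```python
import numpy as np

# Integrate ė = -γ ψ(e) and compare the time at which |e| becomes
# negligible with the bound 2|e(0)|^{1-τ}/(γ(1-τ)) from the proof.
gamma, tau, e0, dt = 1.0, 0.2, 2.0, 1e-4

def psi(x):   # sign-bi-power reading of the Li activation function [20]
    return 0.5 * (np.abs(x)**tau + np.abs(x)**(1 / tau)) * np.sign(x)

e, t = e0, 0.0
while abs(e) > 1e-5 and t < 10.0:
    e -= dt * gamma * psi(e)
    t += dt

bound = 2 * abs(e0)**(1 - tau) / (gamma * (1 - tau))
assert t < bound        # convergence occurs within the proven bound
```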
4 Simulative Verifications
In the previous section, the LFAZNN model (9) and its finite-time convergence property have been presented and analyzed to solve TVLMI (1). In this section, simulation results with two illustrative examples are provided to substantiate the efficacy of the presented LFAZNN model (9) for TVLMI solving.
Example 1
In the first example, let us consider TVLMI (1) with smoothly time-varying coefficient matrices A(t) and B(t) being as follows:
The simulation results synthesized by the presented LFAZNN model (9) are illustrated in Figs. 2, 3, 4 and 5.
Specifically, Fig. 2 shows the neural states of the presented LFAZNN model (9) with \(\tau =0.2\) and \(\gamma =1\). As seen from Fig. 2, the trajectories of X(t) [being the first four elements of y(t) in (9)] and \(\varLambda (t)\) [being the rest elements of y(t)], starting from five randomly-generated initial states, are time-varying. In addition, Fig. 3a shows the characteristics of residual error \(\Vert P(t)y(t)-w(t)\Vert _{2}=\Vert A(t)X(t)-B(t)+\varLambda ^{2}(t)\Vert _{\text {F}}\) (with \(\Vert \cdot \Vert _{\text {2}}\) denoting the 2-norm of a vector and \(\Vert \cdot \Vert _{\text {F}}\) denoting the Frobenius norm of a matrix), from which we can observe that the residual errors of the presented LFAZNN model (9) are all convergent to zero. This means that the solutions of X(t) and \(\varLambda (t)\) shown in Fig. 2 are the time-varying solutions to \(A(t)X(t)-B(t)+\varLambda ^{2}(t)=0\). Given that \(-\varLambda ^{2}(t)\leqslant 0\), \(A(t)X(t)=B(t)-\varLambda ^{2}(t)\leqslant B(t)\), showing that the X(t) solution is an exact time-varying solution of TVLMI (1). For better understanding, Fig. 3b illustrates the profiles of the testing error function \(\varepsilon (t)=A(t)X(t)-B(t)\). As shown in Fig. 3b, all the elements of \(\varepsilon (t)\) are less than or equal to zero, i.e., \(A(t)X(t)-B(t)\leqslant 0\). Evidently, these simulation results substantiate the efficacy of the presented LFAZNN model (9) for TVLMI solving. Besides, it is worth pointing out here that, as shown in Fig. 3a, instead of converging exponentially to zero, the residual errors of (9) decrease directly to zero within finite time. This result of finite-time convergence is quite different from the convergence results in the previous work [13, 17, 19].
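For readers who wish to reproduce the qualitative behavior, a minimal forward-Euler sketch of LFAZNN model (9) follows. The coefficient matrices A(t) and B(t) here are hypothetical (chosen only to keep A(t) nonsingular, not the matrices used in this example), and the sign-bi-power form of the Li activation is assumed:

```python
import numpy as np

# Hypothetical m = 2, n = 1 instance: y(t) ∈ R^4 stacks u = vec(X) and
# v = vec(Λ); forward-Euler integration of model (9).
gamma, tau, dt, T = 5.0, 0.2, 1e-4, 2.0

A  = lambda t: np.array([[2 + np.sin(t), 0.3], [0.2, 2 + np.cos(t)]])
dA = lambda t: np.array([[np.cos(t), 0.0], [0.0, -np.sin(t)]])
b  = lambda t: np.array([np.sin(2 * t), np.cos(2 * t)])
db = lambda t: np.array([2 * np.cos(2 * t), -2 * np.sin(2 * t)])

def psi(x):  # sign-bi-power reading of the Li activation function [20]
    return 0.5 * (np.abs(x)**tau + np.abs(x)**(1 / tau)) * np.sign(x)

y = np.array([0.5, -0.5, 0.4, -0.3])          # initial state [u; v]
for k in range(int(T / dt)):
    t = k * dt
    u, v = y[:2], y[2:]
    M, D = A(t), np.diag(v)                    # M = I ⊗ A(t) reduces to A(t) for n = 1
    W = np.hstack((M, 2 * D))                  # W(t) = [M(t), 2 D(t)]
    rhs = -dA(t) @ u + db(t) - gamma * psi(M @ u + v * v - b(t))
    y = y + dt * (W.T @ np.linalg.solve(W @ W.T, rhs))

u, v = y[:2], y[2:]
res = A(T) @ u + v * v - b(T)
assert np.linalg.norm(res) < 1e-2              # residual vanished in finite time
assert np.all(A(T) @ u <= b(T) + 1e-3)         # hence A(t)X(t) <= B(t) at t = T
```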
By increasing the value of \(\gamma \) (i.e., from 1 to 10), the presented LFAZNN model (9) is simulated and the related results are shown in Fig. 4. As observed from such a figure [especially, Fig. 4b], the problem of TVLMI (1) is solved effectively via (9) in terms of residual errors converging to zero rapidly. This means that the X(t) trajectories in Fig. 4a satisfy \(A(t)X(t)\leqslant B(t)\); i.e., they are the solutions to TVLMI (1). These results substantiate again the efficacy of the presented LFAZNN model (9) for TVLMI solving. Furthermore, a comparison of Figs. 3a and 4b shows that the convergence performance of (9) is improved as the value of \(\gamma \) increases. This result indicates that superior computational performance of the presented LFAZNN model (9) (including its finite-time convergence) can be achieved by increasing \(\gamma \).
Besides, for comparison and to provide an intuitive result on finite-time convergence, the ZNN model (8) using different types of activation functions (i.e., the linear, hyperbolic-sine, power-sigmoid, power-sum, and Li activation functions [13, 15,16,17,18,19,20]) is simulated to solve TVLMI (1). The corresponding results are shown in Fig. 5, where \(\tau =0.2\) for the LFAZNN model (9). Evidently, these results indicate that the TVLMI (1) is solved successfully via (8); i.e., the residual errors are all convergent to zero. In addition, the following results are observed and summarized from Fig. 5.
-
The convergence performance of the ZNN model (8) [including the LFAZNN model (9)] is improved effectively by increasing the value of \(\gamma \). Thus, the important role of \(\gamma \) is confirmed again.
-
The ZNN model (8) using the hyperbolic-sine, power-sigmoid and power-sum activation functions possesses better convergence than the one using the linear activation function.
-
The LFAZNN model (9) possesses finite-time convergence, i.e., the corresponding residual error decreases directly to zero within finite time instead of converging to zero only as \(t\rightarrow \infty \).
Based on these results, the LFAZNN model (9) is advantageous over the ZNN model (8) using the previously-investigated activation functions [13, 15,16,17,18,19] because it has the property of finite-time convergence on TVLMI solving. Notably, using the Li activation function, the ZNN models for solving other time-varying problems [21,22,23,24] still possess finite-time convergence. In addition, based on this activation function, more activation functions can be constructed to further improve the ZNN performance [27, 28]. This can be viewed as the generalizability of the Li activation function compared with those in [13, 15,16,17,18,19].
In summary, the above simulation results (i.e., Figs. 2, 3, 4 and 5) have substantiated the efficacy of the presented LFAZNN model (9) for online solution of TVLMI (1), and more importantly, its property of finite-time convergence (in terms of the residual error decreasing directly to zero within finite time).
Example 2
In the second example, we investigate how the value of parameter \(\tau \) affects the convergence performance of the presented LFAZNN model (9). The time-varying coefficient matrices A(t) and B(t) of TVLMI (1) in this example are designed as follows:
The corresponding simulation results are illustrated in Figs. 6 and 7.
Specifically, as synthesized by the presented LFAZNN model (9) with \(\tau =0.4\) and \(\gamma =5\), Fig. 6a shows the trajectories of X(t) [being the first six elements of y(t) in (9)] that are time-varying. Figure 6b shows the characteristics of residual error \(\Vert P(t)y(t)-w(t)\Vert _{2}=\Vert A(t)X(t)-B(t)+\varLambda ^{2}(t)\Vert _{\text {F}}\), which indicates that the residual errors of (9) decrease directly to zero within finite time. These results substantiate the efficacy (as well as the finite-time convergence property) of the presented LFAZNN model (9) for TVLMI solving.
By changing the value of \(\tau \) (i.e., \(\tau =0.6\) and \(\tau =0.8\)), the presented LFAZNN model (9) is simulated to solve the aforementioned TVLMI. The related results are shown in Fig. 7, which indicate that (9) is effective on solving the TVLMI (1) in terms of the residual errors converging to zero rapidly and within finite time. The finite-time convergence property of (9) for TVLMI solving is thus substantiated once again. Comparing Figs. 6b and 7 further shows that the convergence times of (9) in these figures are about 0.7277, 0.8778 and 1.5071 s, respectively. This suggests that decreasing the value of \(\tau \) can enhance the finite-time convergence performance of the presented LFAZNN model (9). Therefore, \(\tau \) in (9) should be set appropriately small [20] to ensure that the residual error of (9) converges to zero in a short period of time.
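The monotone effect of \(\tau \) is visible directly in the proven bound \(t_\text {c}<2|{\tilde{e}}(0)|^{1-\tau }/(\gamma (1-\tau ))\). A one-line check, with \(\gamma =5\) as in this example and an assumed initial error of 2:

```python
import numpy as np

# The bound t_c < 2 |e(0)|^{1-τ} / (γ(1-τ)) grows with τ for these settings
# (e(0) = 2 is an assumed initial error), consistent with the observation
# that smaller τ speeds up the finite-time convergence.
gamma, e0 = 5.0, 2.0
bounds = [2 * e0**(1 - tau) / (gamma * (1 - tau)) for tau in (0.4, 0.6, 0.8)]
assert bounds[0] < bounds[1] < bounds[2]
```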
Remark
It follows from Figs. 3, 4, 5, 6 and 7 that the convergence of the presented LFAZNN model (9) can be expedited by increasing the value of \(\gamma \), and can be further shortened by decreasing the value of \(\tau \). Thus, to achieve superior convergence for (9), parameter \(\gamma \) should be set appropriately large, while parameter \(\tau \) should be set appropriately small.
5 Conclusions
In this paper, by exploiting Li activation function, the LFAZNN model (9) has been presented and investigated for online solution of TVLMI (1). Then, theoretical results have been provided to show the excellent computational performance of the presented LFAZNN model (9). That is, such an LFAZNN model has the property of finite-time convergence on solving TVLMI. Comparative simulation results with two illustrative examples have been illustrated to further substantiate the efficacy of the presented LFAZNN model (9) for TVLMI solving, as well as its finite-time convergence property (in terms of the residual error decreasing directly to zero within finite time).
References
Kim E, Kang HJ, Park M (1999) Numerical stability analysis of fuzzy control systems via quadratic programming and linear matrix inequalities. IEEE Trans Syst, Man, Cybern, Part A 29(4):333–346
Chesi G, Garulli A, Tesi A, Vicino A (2003) Solving quadratic distance problems: An LMI-based approach. IEEE Trans Autom Control 48(2):200–212
Chesi G (2010) LMI techniques for optimization over polynomials in control: A survey. IEEE Trans Autom Control 55(11):2500–2510
Jing X (2012) Robust adaptive learning of feedforward neural networks via LMI optimizations. Neural Netw 31:33–45
Boyd S, Ghaoui LE, Feron E, Balakrishnan V (1994) Linear Matrix Inequalities in System and Control Theory. SIAM, Philadelphia
Liao X, Chen G, Sanchez EN (2002) Delay-dependent exponential stability analysis of delayed neural networks: An LMI approach. Neural Netw 15(7):855–866
Lin C-L, Lai C-C, Huang T-H (2000) A neural network for linear matrix inequality problems. IEEE Trans Neural Netw 11(5):1078–1092
Lin C-L, Huang T-H (2000) A novel approach solving for linear matrix inequalities using neural networks. Neural Process Lett 11(2):153–169
Cheng L, Hou ZG, Tan M (2009) A simplified neural network for linear matrix inequality problems. Neural Process Lett 29(3):213–230
Su T-J, Huang M-Y, Hou C-L, Lin Y-J (2010) Cellular neural networks for gray image noise cancellation based on a hybrid linear matrix inequality and particle swarm optimization approach. Neural Process Lett 32(2):147–165
Tan M (2016) Stabilization of coupled time-delay neural networks with nodes of different dimensions. Neural Process Lett 43(1):255–268
Wu H, Zhang X, Xue S, Wang L, Wang Y (2016) LMI conditions to global Mittag-Leffler stability of fractional-order neural networks with impulses. Neurocomputing 193:148–154
Guo D, Zhang Y (2014) Zhang neural network for online solution of time-varying linear matrix inequality aided with an equality conversion. IEEE Trans Neural Netw Learn Syst 25(2):370–382
Hu X (2010) Dynamic system methods for solving mixed linear matrix inequalities and linear vector inequalities and equalities. Appl Math Comput 216(4):1181–1193
Zhang Y, Guo D (2015) Zhang Functions and Various Models. Springer-Verlag, Heidelberg
Zhang Y, Yi C (2011) Zhang Neural Networks and Neural-Dynamic Method. Nova Science Publishers, New York
Guo D, Zhang Y (2012) A new variant of the Zhang neural network for solving online time-varying linear inequalities. Proc R Soc A. 468(2144):2255–2271
Zhang Y, Jiang D, Wang J (2002) A recurrent neural network for solving Sylvester equation with time-varying coefficients. IEEE Trans Neural Netw 13(5):1053–1063
Sun J, Wang S, Wang K (2016) Zhang neural networks for a set of linear matrix inequalities with time-varying coefficient matrix. Inf Process Lett 116(10):603–610
Li S, Chen S, Liu B (2013) Accelerating a recurrent neural network to finite-time convergence for solving time-varying Sylvester equation by using a sign-bi-power activation function. Neural Process Lett 37(2):189–205
Zhang Y, Ding Y, Qiu B, Zhang Y, Li X (2017) Signum-function array activated ZNN with easier circuit implementation and finite-time convergence for linear systems solving. Inf Process Lett 124:30–34
Jin L, Li S, Wang H, Zhang Z (2018) Nonconvex projection activated zeroing neurodynamic models for time-varying matrix pseudoinversion with accelerated finite-time convergence. Appl Soft Comput 62:840–850
Xiao L, Liao B, Li S, Zhang Z, Ding L, Jin L (2018) Design and analysis of FTZNN applied to the real-time solution of a nonstationary Lyapunov equation and tracking control of a wheeled mobile manipulator. IEEE Trans Ind Inform 14(1):98–105
Xiao L, Liao B, Li S, Chen K (2018) Nonlinear recurrent neural networks for finite-time solution of general time-varying linear matrix equations. Neural Netw 98:102–113
Liu S, Trenkler G (2008) Hadamard, Khatri-Rao, Kronecker and other matrix products. Int J Coop Inf Syst 4(1):160–177
Mead C (1989) Analog VLSI and Neural Systems. Addison-Wesley, Boston, USA
Xiao L, Li S, Yang J, Zhang Z (2018) A new recurrent neural network with noise-tolerance and finite-time convergence for dynamic quadratic minimization. Neurocomputing 285:125–132
Lv X, Xiao L, Tan Z (2019) Improved Zhang neural network with finite-time convergence for time-varying linear system of equations solving. Inf Process Lett 147:88–93
Acknowledgements
The authors would like to thank the editors and anonymous reviewers for their valuable suggestions and constructive comments, which have greatly helped the authors improve the presentation and quality of this paper.
This work is supported by the National Natural Science Foundation of China with number 61603143, the Quanzhou City Science and Technology Program of China with number 2018C111R, the Promotion Program for Young and Middle-aged Teacher in Science and Technology Research of Huaqiao University with number ZQN-YX402, and the Scientific Research Funds of Huaqiao University with number 15BS410.
Appendix: Procedure of ZNN Design
In this appendix, the general procedure of the ZNN design [13, 15,16,17,18,19] for time-varying problems solving is presented.
Specifically, as to a time-varying problem \(F(t)=0\in R^{m\times n}\) to be solved, the following matrix/vector-valued error function \(E(t)\in R^{m\times n}\) is first defined:

$$E(t)=F(t). \qquad (10)$$
Next, the time derivative \({\dot{E}}(t)\) of E(t) is selected such that E(t) can globally and exponentially converge to zero. More specifically, \({\dot{E}}(t)\) can be selected via the ZNN design formula [13, 15,16,17,18,19] as follows:

$${\dot{E}}(t)=-\gamma {\mathcal {F}}(E(t)), \qquad (11)$$
where \(\gamma >0\in R\) denotes the parameter that is used to scale the convergence rate of the solution, and \({\mathcal {F}}(\cdot ): R^{m\times n}\rightarrow R^{m\times n}\) denotes the monotonically-increasing odd activation-function array. Finally, by expanding the ZNN design formula (11), the differential equation of a ZNN model is established for solving a specific time-varying problem.
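For a concrete scalar instance of this procedure, consider \(F(t)=a(t)x(t)-b(t)=0\) with a linear activation; the coefficients below are illustrative assumptions, and expanding the design formula (11) gives the sketched model:

```python
import numpy as np

# Scalar ZNN design: E = a x - b, and expanding Ė = -γE gives
# a ẋ + ȧ x - ḃ = -γ(a x - b), i.e., ẋ = (ḃ - ȧ x - γ(a x - b)) / a.
gamma, dt, T = 10.0, 1e-4, 2.0
a  = lambda t: 2 + np.sin(t)                   # a(t) >= 1, hence nonsingular
da = lambda t: np.cos(t)
b  = lambda t: np.cos(2 * t)
db = lambda t: -2 * np.sin(2 * t)

x = 0.0                                        # arbitrary initial state
for k in range(int(T / dt)):
    t = k * dt
    xdot = (db(t) - da(t) * x - gamma * (a(t) * x - b(t))) / a(t)
    x += dt * xdot

assert abs(a(T) * x - b(T)) < 1e-2             # E(t) has decayed near zero
```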
Guo, D., Lin, X. Li-Function Activated Zhang Neural Network for Online Solution of Time-Varying Linear Matrix Inequality. Neural Process Lett 52, 713–726 (2020). https://doi.org/10.1007/s11063-020-10291-y