1 Introduction

Recently, the linear matrix inequality (LMI) has played an increasingly important role in science and engineering [1,2,3,4,5,6]. LMI-based approaches have been designed and used in many fields, such as the stability analysis of fuzzy control systems, the robustness analysis of neural networks, and gain-scheduled controller design. The LMI is now considered a powerful formulation and design technique for solving different types of problems [1,2,3,4,5,6,7,8,9,10,11,12,13]. For example, the Lyapunov matrix inequality is constructed to analyze the stability and performance of delayed dynamical systems [6,7,8,9]. As a powerful tool, LMI-based techniques are also applicable to system identification problems, nonconvex multi-criterion optimal control problems, and inverse problems of optimal control [9, 13]. Generally speaking, these problems are transformed into related LMI problems, which are then solved by using neural-network models and/or numerical algorithms [7,8,9, 13]. Given this importance, an effective model/algorithm for solving a class of LMIs is worth developing and investigating.

To solve a class of LMIs (e.g., Lyapunov matrix inequalities and algebraic Riccati matrix inequalities), gradient-based neural networks (which can be viewed as the conventional approach) were developed by Lin et al. in [7] and [8]. In [9], a simplified neural network for solving a class of LMI problems was presented by Cheng et al. Besides, dynamic system methods were provided by Hu [14] for solving mixed LMIs and linear vector inequalities and equalities. Note that these neural networks and methods are intrinsically designed for solving time-invariant LMIs. As many systems in engineering applications are time-varying, the corresponding LMIs may also be time-varying (i.e., the coefficients change as time t evolves). When the aforementioned neural networks and methods are exploited directly to solve time-varying LMIs, the time-derivative information of the coefficients is not involved. They may thus be less effective and efficient because such important information is not taken into account [13, 15,16,17].

Since March 2001, a typical recurrent neural network termed the Zhang neural network (ZNN) has been formally proposed by Zhang et al. for solving various time-varying problems [16, 18]. The general procedure of the ZNN design is presented in the Appendix. For better understanding, the concept of the ZNN [13] is given as follows.

Concept Being a typical recurrent neural network, the Zhang neural network (ZNN) has been developed and studied since 2001. It originates from the research on Hopfield-type neural networks and is a systematic approach to solving time-varying problems. Such a ZNN differs from traditional gradient-based neural networks in terms of the problem to be solved, the indefinite error function, the exponent-type design formula, the dynamic equation, and the utilization of time-derivative information.

By following the inspirational work of Zhang et al. [16, 18], two different ZNN models for time-varying LMI (TVLMI) solving were proposed and investigated in the authors’ previous work [13, 17]. Specifically, in [17], the first ZNN model was developed by exploiting a variant of the ZNN design formula. In [13], the second ZNN model was developed by means of the conversion from inequality to equality. Of these two ZNN models, the former is depicted in an implicit dynamics, while the latter is depicted in an explicit dynamics. Theoretical and simulative results were also provided in [17] and [13] to further show the efficacy of the two presented ZNN models for TVLMI solving. By following the research of [13] and [17], another ZNN model was developed by Sun et al. in [19] for solving the time-varying Stein matrix inequality (which, to some extent, shows the significance of the ZNN research community). All of the results in [13, 17] and [19] also indicate that different choices of activation functions lead to different convergence performances of a ZNN model. That is, the activation function greatly affects the ZNN convergence.

In [20], a special activation function termed the Li activation function was designed by Li et al. to endow the ZNN model with finite-time convergence. Recent research on the ZNN [21,22,23,24] has further revealed that finite-time convergence of ZNN models can be achieved by exploiting a specifically designed activation function. Motivated by the idea in [20], this paper presents a ZNN model with finite-time convergence for TVLMI solving by employing the Li activation function. Evidently, the convergence property of such a model differs from that of the existing ZNN models [13, 17, 19] using previously investigated activation functions (e.g., power-sigmoid and hyperbolic-sine activation functions). It is also worth pointing out that research on TVLMI solving is currently rare (though it is an appealing topic), not to mention a computational model with finite-time convergence.

Being an extension of the successful work [13, 17, 19], this paper presents and investigates the Li-function activated ZNN (LFAZNN) model for online solution of TVLMI. Specifically, by defining an indefinite matrix-valued error function and by using the ZNN design formula, the ZNN model depicted in an explicit dynamics is developed for TVLMI solving. For such a ZNN model, employing Li activation function [20] results in the LFAZNN model. Then, it is theoretically proven that the presented LFAZNN model has the property of finite-time convergence. Simulation results with two illustrative examples further substantiate the efficacy of the presented LFAZNN model for TVLMI solving.

The rest of this paper is organized into four sections. Section 2 presents the formulations of the TVLMI problem and the ZNN model. In Sect. 3, the LFAZNN model is presented and investigated for TVLMI solving, together with theoretical results showing the computational performance of such an LFAZNN model. In Sect. 4, as synthesized by the presented LFAZNN model, simulation results are illustrated. Section 5 concludes this paper with final remarks. Before ending this section, it is worth summarizing and listing the main contributions of this paper as follows.

  • This paper presents the Li-function activated ZNN (LFAZNN) model for online solution of time-varying linear matrix inequality (TVLMI). Note that Li activation function is different from the activation functions investigated in [13, 17, 19].

  • This paper gives a theoretical result showing the finite-time convergence of the presented LFAZNN model. This is the first time that a finite-time exact solution to the TVLMI is provided.

  • This paper illustrates comparative simulation results to further substantiate the efficacy of the presented LFAZNN model for TVLMI solving.

2 Time-Varying Linear Matrix Inequality

In this section, the formulation of the TVLMI problem is presented. Then, by defining an indefinite matrix-valued error function and by using the ZNN design formula [13, 15,16,17,18,19], the resultant ZNN model is developed for TVLMI solving.

Specifically, in this paper, the following TVLMI problem is considered:

$$\begin{aligned} A(t)X(t)\leqslant B(t), \end{aligned}$$
(1)

where \(A(t)\in R^{m\times m}\) and \(B(t)\in R^{m\times n}\) are smoothly time-varying matrices, and \(X(t)\in R^{m\times n}\) is the unknown matrix to be obtained. The objective of this paper is to find a feasible solution X(t) such that (1) holds true for any time instant \(t\geqslant 0\). Note that, in (1), the symbol “\(\leqslant \)” (between two matrices) means that every element of the left-hand matrix is less than or equal to the corresponding element of the right-hand matrix. Besides, to lay a basis for further discussion, A(t) is assumed to be nonsingular at any time instant \(t\in [0,+\infty )\) in this paper.

To solve the TVLMI (1), the indefinite matrix-valued error function is defined as follows:

$$\begin{aligned} E(t)=A(t)X(t)-B(t)+\varLambda ^{2}(t)\in R^{m\times n}, \end{aligned}$$
(2)

where \(\varLambda ^{2}(t)=\varLambda (t)\odot \varLambda (t)\) with the time-varying matrix \(\varLambda (t)\) being

$$\begin{aligned} \varLambda (t)=\begin{bmatrix} \lambda _{11}(t) &{}\quad \lambda _{12}(t) &{} \cdots &{} \lambda _{1n}(t)\\ \lambda _{21}(t) &{}\quad \lambda _{22}(t) &{} \cdots &{} \lambda _{2n}(t)\\ \vdots &{} \vdots &{} \ddots &{} \vdots \\ \lambda _{m1}(t) &{}\quad \lambda _{m2}(t) &{} \cdots &{} \lambda _{mn}(t) \end{bmatrix}\in R^{m\times n}. \end{aligned}$$

Note that the multiplication operator \(\odot \) is the Hadamard product [25]. Thus, the time-varying matrix \(\varLambda ^{2}(t)\) in (2) is written as below:

$$\begin{aligned} \varLambda ^{2}(t)=\begin{bmatrix} \lambda ^{2}_{11}(t) &{}\quad \lambda ^{2}_{12}(t) &{} \cdots &{} \lambda ^{2}_{1n}(t)\\ \lambda ^{2}_{21}(t) &{}\quad \lambda ^{2}_{22}(t) &{} \cdots &{} \lambda ^{2}_{2n}(t)\\ \vdots &{} \vdots &{} \ddots &{} \vdots \\ \lambda ^{2}_{m1}(t) &{}\quad \lambda ^{2}_{m2}(t) &{} \cdots &{} \lambda ^{2}_{mn}(t) \end{bmatrix}\in R^{m\times n}, \end{aligned}$$

of which each element is greater than or equal to zero, i.e., \(\varLambda ^{2}(t)\geqslant 0\). Evidently, when \(E(t)=0\), the following result is obtained:

$$\begin{aligned} A(t)X(t)-B(t)=-\varLambda ^{2}(t)\leqslant 0. \end{aligned}$$

That is to say, solving the TVLMI (1) can be equivalent to solving the time-varying matrix equation \(A(t)X(t)-B(t)+\varLambda ^{2}(t)=0\).
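The inequality-to-equality conversion above can be checked numerically. The following sketch (illustrative NumPy code at a fixed time instant; all names are hypothetical, not from the paper) constructs X from an arbitrary \(\varLambda \) so that \(AX-B+\varLambda \odot \varLambda =0\) holds exactly, and then verifies that this X satisfies the LMI elementwise:

```python
import numpy as np

# Numerical check of the inequality-to-equality conversion at a fixed time
# instant (all names are illustrative, not from the paper).
rng = np.random.default_rng(0)
m, n = 3, 2
A = rng.standard_normal((m, m))            # assumed nonsingular (generic case)
B = rng.standard_normal((m, n))
Lam = rng.standard_normal((m, n))          # an arbitrary Lambda(t) snapshot
# Construct X so that A X - B + Lam (Hadamard) Lam = 0 holds exactly:
X = np.linalg.solve(A, B - Lam * Lam)      # '*' is the Hadamard product
assert np.allclose(A @ X - B + Lam * Lam, 0)
# The Hadamard square is entrywise nonnegative, hence A X <= B elementwise:
assert np.all(A @ X <= B + 1e-12)
```

Any entrywise-nonnegative slack matrix would serve the same purpose; the Hadamard square is used because it makes \(\varLambda (t)\) an unconstrained variable.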

Besides, for further discussion, the relationship between matrices \(\varLambda (t)\) and \(\varLambda ^{2}(t)\) is reformulated as

$$\begin{aligned} \text {vec}(\varLambda ^{2}(t))=D(t)\text {vec}(\varLambda (t)), \end{aligned}$$

where the operator \(\text {vec}(\cdot )\) generates a column vector in \(R^{mn}\) by stacking all column vectors of a matrix [13, 15, 16]. In addition, the diagonal matrix D(t) is defined as follows:

$$\begin{aligned} D(t)=\begin{bmatrix} {\tilde{\lambda }}_{1}(t) &{}\quad 0 &{} \cdots &{} 0\\ 0 &{}\quad {\tilde{\lambda }}_{2}(t) &{} \cdots &{} 0\\ \vdots &{} \vdots &{} \ddots &{} \vdots \\ 0 &{}\quad 0 &{} \cdots &{} {\tilde{\lambda }}_{n}(t) \end{bmatrix}\in R^{mn\times mn}, \end{aligned}$$

with the ith (with \(i=1,\ldots ,n\)) block matrix being

$$\begin{aligned} {\tilde{\lambda }}_{i}(t)=\begin{bmatrix} \lambda _{1i}(t) &{}\quad 0 &{} \cdots &{} 0\\ 0 &{}\quad \lambda _{2i}(t) &{} \cdots &{} 0\\ \vdots &{} \vdots &{} \ddots &{} \vdots \\ 0 &{}\quad 0 &{} \cdots &{} \lambda _{mi}(t) \end{bmatrix}\in R^{m\times m}. \end{aligned}$$

By defining \(v(t)=\text {vec}(\varLambda (t))\in R^{mn}\), the time derivative of \(\text {vec}(\varLambda ^{2}(t))\) is formulated as

$$\begin{aligned} \frac{\text {d}(\text {vec}(\varLambda ^{2}(t)))}{\text {d}t}=2D(t){\dot{v}}(t), \end{aligned}$$

with \({\dot{v}}(t)\) being the time derivative of v(t). Furthermore, by defining \(u(t)=\text {vec}(X(t))\in R^{mn}\) and \(w(t)=\text {vec}(B(t))\in R^{mn}\), the error function (2) is reformulated as follows:

$$\begin{aligned} {\tilde{E}}(t)=M(t)u(t)-w(t)+D(t)v(t)\in R^{mn}, \end{aligned}$$
(3)

where \(M(t)=I\otimes A(t)\in R^{mn\times mn}\) with \(I\in R^{n\times n}\) being the identity matrix and \(\otimes \) denoting the Kronecker product [13, 15, 16].
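The two vectorization identities used above can be verified numerically. The following sketch (illustrative names; `vec` is the column-stacking operator) checks both \(\text {vec}(AX)=(I\otimes A)\,\text {vec}(X)\) and \(\text {vec}(\varLambda ^{2})=D\,\text {vec}(\varLambda )\) with \(D=\text {diag}(\text {vec}(\varLambda ))\):

```python
import numpy as np

rng = np.random.default_rng(1)
m, n = 3, 2
A = rng.standard_normal((m, m))
X = rng.standard_normal((m, n))
Lam = rng.standard_normal((m, n))

vec = lambda Mt: Mt.flatten(order='F')     # column-stacking vec(.)

# M(t) = I (kron) A(t), so that vec(A X) = M vec(X):
M = np.kron(np.eye(n), A)
assert np.allclose(M @ vec(X), vec(A @ X))

# D(t) carries the entries of Lambda(t) on its diagonal,
# so that vec(Lambda^2) = D vec(Lambda):
D = np.diag(vec(Lam))
assert np.allclose(D @ vec(Lam), vec(Lam * Lam))  # '*' is the Hadamard product
```

Note that NumPy flattens in row-major order by default, so `order='F'` is needed to match the column-stacking convention of \(\text {vec}(\cdot )\).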

To make the error function \({\tilde{E}}(t)\) converge to zero, the time derivative of \({\tilde{E}}(t)\), i.e., \(\dot{{\tilde{E}}}(t)\), is chosen via the following form (i.e., the so-called ZNN design formula) [13, 15,16,17,18,19]:

$$\begin{aligned} \dot{{\tilde{E}}}(t)=-\gamma {\mathcal {F}}({\tilde{E}}(t)), \end{aligned}$$
(4)

where the design parameter \(\gamma >0\), being the reciprocal of a capacitance parameter [26], is used to scale the convergence rate of the solution, and \({\mathcal {F}}(\cdot )\) denotes the activation-function array. Note that, in general, the value of \(\gamma \) should be set as large as the hardware would permit, or selected appropriately for simulation purposes [26]. In addition, the function \(f(\cdot )\), being a processing element of \({\mathcal {F}}(\cdot )\), can be any monotonically increasing odd activation function, e.g., the linear, power-sigmoid, power-sum and hyperbolic-sine activation functions [13, 15,16,17,18,19].

Based on (3) and (4), the following dynamic equation of a ZNN model is established:

$$\begin{aligned} M(t){\dot{u}}(t)+2D(t){\dot{v}}(t)=-{\dot{M}}(t)u(t)+{\dot{w}}(t)-\gamma {\mathcal {F}}(M(t)u(t)-w(t)+D(t)v(t)), \end{aligned}$$
(5)

where \({\dot{u}}(t)\), \({\dot{M}}(t)\) and \({\dot{w}}(t)\) are the time derivatives of u(t), M(t) and w(t), respectively. As for (5), it is reformulated as

$$\begin{aligned} \begin{bmatrix} M(t)~~2D(t) \end{bmatrix} \begin{bmatrix} {\dot{u}}(t)\\ {\dot{v}}(t) \end{bmatrix}=\begin{bmatrix} -{\dot{M}}(t)~~0 \end{bmatrix} \begin{bmatrix} u(t)\\ v(t) \end{bmatrix} +{\dot{w}}(t)-\gamma {\mathcal {F}}(\begin{bmatrix} M(t)~~D(t) \end{bmatrix}\begin{bmatrix} u(t)\\ v(t) \end{bmatrix}-w(t)). \end{aligned}$$
(6)

Thus, by defining the augmented vector \(y(t)=[u^{\text {T}}(t),v^{\text {T}}(t)]^{\text {T}}\in R^{2mn}\), (6) is further rewritten as follows:

$$\begin{aligned} W(t){\dot{y}}(t)=Q(t)y(t)+{\dot{w}}(t)-\gamma {\mathcal {F}}(P(t)y(t)-w(t)), \end{aligned}$$
(7)

where \({\dot{y}}(t)\) is the time derivative of y(t). In addition, the augmented matrices \(W(t)\in R^{mn\times 2mn}\), \(Q(t)\in R^{mn\times 2mn}\) and \(P(t)\in R^{mn\times 2mn}\) are defined respectively as below:

$$\begin{aligned} \begin{aligned} W(t)=\begin{bmatrix} M^\text {T}(t)\\ 2D^\text {T}(t) \end{bmatrix}^\text {T},~ Q(t)=\begin{bmatrix} -{\dot{M}}^\text {T}(t) \\ 0 \end{bmatrix}^\text {T}~\text {and}~ P(t)=\begin{bmatrix} M^\text {T}(t)\\ D^\text {T}(t) \end{bmatrix}^\text {T}. \end{aligned} \end{aligned}$$

To make (7) more computable, we can reformulate (7) as the following explicit form:

$$\begin{aligned} {\dot{y}}(t)=W^{\dag }(t)Q(t)y(t)+W^{\dag }(t){\dot{w}}(t)-\gamma W^{\dag }(t){\mathcal {F}}(P(t)y(t)-w(t)), \end{aligned}$$
(8)

where \(W^{\dag }(t)=W^{\text {T}}(t)(W(t)W^{\text {T}}(t))^{-1}\in R^{2mn\times mn}\). Therefore, based on the error function (2), the ZNN model (8) is obtained for TVLMI solving. For (8), the following theoretical result is provided, of which the proof can be generalized from [13, 17,18,19].
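The construction of ZNN model (8) can be summarized in a minimal numerical sketch. Here `znn_rhs` is a hypothetical helper (not from the paper), a linear activation is used purely as an illustrative default, and `np.linalg.pinv` stands in for \(W^{\dag }(t)=W^{\text {T}}(t)(W(t)W^{\text {T}}(t))^{-1}\):

```python
import numpy as np

def znn_rhs(t, y, A, Adot, B, Bdot, gamma=1.0, F=lambda e: e):
    """Right-hand side of ZNN model (8):
    ydot = W^+ (Q y + wdot - gamma * F(P y - w)).

    A, Adot, B, Bdot are callables returning the coefficient matrices and
    their time derivatives at time t; the default activation F is linear,
    purely for illustration."""
    m, n = B(t).shape
    mn = m * n
    u, v = y[:mn], y[mn:]                    # u = vec(X(t)), v = vec(Lambda(t))
    M = np.kron(np.eye(n), A(t))             # M(t) = I (kron) A(t)
    Mdot = np.kron(np.eye(n), Adot(t))
    D = np.diag(v)                           # so that vec(Lambda^2) = D v
    w = B(t).flatten(order='F')              # w(t) = vec(B(t))
    wdot = Bdot(t).flatten(order='F')
    W = np.hstack((M, 2.0 * D))              # W(t) = [M(t), 2 D(t)]
    P = np.hstack((M, D))                    # P(t) = [M(t), D(t)]
    rhs = -Mdot @ u + wdot - gamma * F(P @ y - w)
    return np.linalg.pinv(W) @ rhs           # pinv stands in for W^+(t)
```

Feeding this right-hand side to any ODE integrator yields a trajectory y(t) whose first mn components form \(\text {vec}(X(t))\); with a monotonically increasing odd activation, the residual \(P(t)y(t)-w(t)\) decays to zero.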

Theorem 1

Given a smoothly time-varying nonsingular coefficient matrix \(A(t)\in R^{m\times m}\) and a smoothly time-varying coefficient matrix \(B(t)\in R^{m\times n}\) in (1), if a monotonically increasing odd activation function array \({\mathcal {F}}(\cdot )\) is used, then ZNN model (8) generates an exact time-varying solution of the TVLMI (1).

3 Li-Function Activated ZNN Model

In this section, by exploiting Li activation function in [20], the Li-function activated ZNN (LFAZNN) model is developed and investigated for TVLMI solving.

Specifically, the Li activation function, which originates from [20], is presented as follows:

$$\begin{aligned} \psi ({\tilde{e}}_{i})=\frac{1}{2}\text {Lip}^{\tau }({\tilde{e}}_{i})+\frac{1}{2}\text {Lip}^{1/\tau }({\tilde{e}}_{i}), \end{aligned}$$

where \({\tilde{e}}_{i}\) is the ith element of \({\tilde{E}}(t)\) depicted in (3) and \(\tau \in (0,1)\). In addition, \(\text {Lip}^{\tau }(\cdot )\) is defined as

$$\begin{aligned} \text {Lip}^{\tau }({\tilde{e}}_{i})= {\left\{ \begin{array}{ll} |{\tilde{e}}_{i}|^{\tau },~&{}{\text {if}}~~{\tilde{e}}_{i}>0,\\ 0,~&{}{\text {if}}~~{\tilde{e}}_{i}=0,\\ -|{\tilde{e}}_{i}|^{\tau },~&{}{\text {if}}~~{\tilde{e}}_{i}<0, \end{array}\right. } \end{aligned}$$

with symbol \(|\cdot |\) denoting the absolute value of a scalar.
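Assuming the definitions above, the Li activation function can be sketched elementwise in NumPy as follows (illustrative function names):

```python
import numpy as np

def lip(e, tau):
    """Sign-preserving power: Lip^tau(e) = sign(e) * |e|^tau (elementwise)."""
    return np.sign(e) * np.abs(e) ** tau

def li_activation(e, tau=0.2):
    """Li activation psi(e) = Lip^tau(e)/2 + Lip^(1/tau)(e)/2, with 0 < tau < 1."""
    return 0.5 * lip(e, tau) + 0.5 * lip(e, 1.0 / tau)
```

For \(0<|{\tilde{e}}_{i}|<1\), the \(|{\tilde{e}}_{i}|^{\tau }\) term dominates, so the decay rate does not vanish as the error approaches zero; this is the mechanism behind the finite-time convergence proved in Theorem 2.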

Thus, by exploiting Li activation function array \(\varPsi (\cdot )\) [with \(\psi (\cdot )\) being its element], the following LFAZNN model for TVLMI solving is established:

$$\begin{aligned} {\dot{y}}(t)=W^{\dag }(t)Q(t)y(t)+W^{\dag }(t){\dot{w}}(t)-\gamma W^{\dag }(t)\varPsi (P(t)y(t)-w(t)). \end{aligned}$$
(9)

As for such an LFAZNN model (9), the theoretical result on the general convergence property is presented as follows, which can be generalized from Theorem 1.

Corollary

Given a smoothly time-varying nonsingular coefficient matrix \(A(t)\in R^{m\times m}\) and a smoothly time-varying coefficient matrix \(B(t)\in R^{m\times n}\) in (1), the presented LFAZNN model (9) generates an exact time-varying solution of the TVLMI (1).

Fig. 1
figure 1

Structure of the neurons in the presented LFAZNN model (9) for TVLMI solving

As to the presented LFAZNN model (9), its neural-network structure is given in Fig. 1. In such a figure, the ith (with \(i=1,2,\ldots ,2mn\)) neuron of (9) is written as follows:

$$\begin{aligned} y_{i}=\int \sum _{k=1}^{\mu }\hat{w}_{ik}\Big (\sum _{j=1}^{\nu }q_{kj}y_{j}+{\dot{w}}_{k}-\gamma \psi \Big (\sum _{j=1}^{\nu }p_{kj}y_{j}-w_{k}\Big )\Big )\text {d}t, \end{aligned}$$

where \(\nu =2\mu =2mn\). \(\hat{w}_{ik}\), \(q_{kj}\) and \(p_{kj}\) denote time-varying weights that correspond to the ikth element of \(W^{\dag }(t)\), the kjth element of Q(t) and the kjth element of P(t), respectively. \({\dot{w}}_{k}\) and \(w_{k}\) denote time-varying thresholds that correspond to the kth elements of \({\dot{w}}(t)\) and w(t), respectively. As shown in Fig. 1, the structure of the presented LFAZNN model (9) consists of three layers. The kth output of the first layer is \(\sum _{j=1}^{\nu }q_{kj}y_{j}+{\dot{w}}_{k}-\gamma \psi (\sum _{j=1}^{\nu }p_{kj}y_{j}-w_{k})\), whereas the ith output of the second layer is \(\sum _{k=1}^{\mu }\hat{w}_{ik}(\sum _{j=1}^{\nu }q_{kj}y_{j}+{\dot{w}}_{k}-\gamma \psi (\sum _{j=1}^{\nu }p_{kj}y_{j}-w_{k}))\), i.e., \({\dot{y}}_{i}\) (the time derivative of \(y_{i}\)). The third layer performs the integral action, and thus its ith output is \({y}_{i}\). Given an initial state, the state \(\{y_{1},y_{2},\ldots ,y_{\nu }\}\) in Fig. 1 evolves as time elapses and finally reaches an equilibrium state. At that time, the output of the third layer is the solution of the TVLMI (1), showing that the presented LFAZNN model (9) can generate a time-varying solution of (1).

Besides, as for the presented LFAZNN model (9), it has the property of finite-time convergence for TVLMI solving.

Theorem 2

Given a smoothly time-varying nonsingular coefficient matrix \(A(t)\in R^{m\times m}\) and a smoothly time-varying coefficient matrix \(B(t)\in R^{m\times n}\) in (1), the error function E(t) of the presented LFAZNN model (9) can converge to zero within finite time, which corresponds to the finite-time convergence of X(t) [whose vectorization form is the first part of the state vector y(t) of (9)] to a time-varying theoretical solution of TVLMI (1).

Proof

As for the presented LFAZNN model (9), it can be reformulated as follows:

$$\begin{aligned} \dot{{\tilde{E}}}(t)=-\gamma \varPsi ({\tilde{E}}(t)), \end{aligned}$$

where \({\tilde{E}}(t)\) is depicted in (3). By defining \({\tilde{e}}_{i}(t)\) as the ith element of \({\tilde{E}}(t)\) (with \(i=1,2,\ldots ,mn\)), we have the following decoupled differential equation:

$$\begin{aligned} \dot{{\tilde{e}}}_{i}(t)=-\gamma \psi ({\tilde{e}}_{i}(t)), \end{aligned}$$

where \(\dot{{\tilde{e}}}_{i}(t)\) denotes the time derivative of \({\tilde{e}}_{i}(t)\).

For further discussion, \({\tilde{E}}(0)=M(0)u(0)-w(0)+D(0)v(0)\) is denoted as the initial value of \({\tilde{E}}(t)\) in (3). In addition, \({\tilde{e}}^+(t)\) is denoted as the element in \({\tilde{E}}(t)\) with the largest initial value \({\tilde{e}}^+(0)=\max \{{\tilde{E}}(0)\}\), and \({\tilde{e}}^-(t)\) is denoted as the element in \({\tilde{E}}(t)\) with the smallest initial value \({\tilde{e}}^-(0)=\min \{{\tilde{E}}(0)\}\).

Based on the comparison lemma [20], since \({\tilde{e}}^+(0)=\max \{{\tilde{E}}(0)\}\geqslant {\tilde{e}}_{i}(0)\) with \(i=1,2,\ldots ,mn\), we have \({\tilde{e}}^+(t)\geqslant {\tilde{e}}_{i}(t)\) for \(t\geqslant 0\). Similarly, we have \({\tilde{e}}^-(t)\leqslant {\tilde{e}}_{i}(t)\) for \(t\geqslant 0\). Thus, for all possible i, the following result is obtained as time t goes on:

$$\begin{aligned} {\tilde{e}}^-(t)\leqslant {\tilde{e}}_{i}(t)\leqslant {\tilde{e}}^+(t), \end{aligned}$$

which means that \({\tilde{e}}_{i}(t)\) converges to zero once both \({\tilde{e}}^+(t)\) and \({\tilde{e}}^-(t)\) reach zero. That is to say, for the presented LFAZNN model (9), the convergence time \(t_\text {c}\) is bounded by the larger of the convergence times of the \({\tilde{e}}^+(t)\) dynamics and the \({\tilde{e}}^-(t)\) dynamics. Mathematically, \(t_\text {c}\leqslant \max \{t_\text {c}^+,t_\text {c}^-\}\), where \(t_\text {c}^+\) and \(t_\text {c}^-\) denote the convergence times corresponding to \({\tilde{e}}^+(t)\) and \({\tilde{e}}^-(t)\), respectively.

Evidently, \(t_\text {c}^+\) and \(t_\text {c}^-\) should be calculated first so as to obtain \(t_\text {c}\). To calculate \(t_\text {c}^+\), we have

$$\begin{aligned} \dot{{\tilde{e}}}^+(t)=-\gamma \psi ({\tilde{e}}^+(t))~\text {with}~{\tilde{e}}^+(0)=\max \{{\tilde{E}}(0)\}. \end{aligned}$$
(10)

Defining a Lyapunov function candidate \(L(t)=|{\tilde{e}}^+(t)|^2\), we have its time derivative as

$$\begin{aligned} \begin{aligned} {\dot{L}}(t)&=-2\gamma {\tilde{e}}^+(t)\psi ({\tilde{e}}^+(t))=-\gamma \left( |{\tilde{e}}^+(t)|^{\tau +1}+|{\tilde{e}}^+(t)|^{\frac{1}{\tau }+1}\right) \\&\leqslant -\gamma |{\tilde{e}}^+(t)|^{\tau +1}=-\gamma L^{\frac{\tau +1}{2}}(t). \end{aligned} \end{aligned}$$

By solving \({\dot{L}}(t)\leqslant -\gamma L^{\frac{\tau +1}{2}}(t)\) with the initial condition \(L(0)=|{\tilde{e}}^+(0)|^2\), the following result is obtained:

$$\begin{aligned} L^{\frac{1-\tau }{2}}(t) {\left\{ \begin{array}{ll} \leqslant |{\tilde{e}}^+(0)|^{1-\tau }-\frac{\gamma t(1-\tau )}{2}, &{} \text {if}~t\leqslant \frac{2|{\tilde{e}}^+(0)|^{1-\tau }}{\gamma (1-\tau )}, \\ =0, &{} \text {if}~ t>\frac{2|{\tilde{e}}^+(0)|^{1-\tau }}{\gamma (1-\tau )}. \end{array}\right. } \end{aligned}$$

This implies that L(t) converges to zero after at most a time period \({2|{\tilde{e}}^+(0)|^{1-\tau }}/{\gamma (1-\tau )}\). Therefore, \({\tilde{e}}^+(t)\) reaches zero for \(t>{2|{\tilde{e}}^+(0)|^{1-\tau }}/{\gamma (1-\tau )}\), i.e., the convergence time for \({\tilde{e}}^+(t)\) satisfies \(t_\text {c}^+<{2|{\tilde{e}}^+(0)|^{1-\tau }}/{\gamma (1-\tau )}\). By following a similar analysis, we have \(t_\text {c}^-<{2|{\tilde{e}}^-(0)|^{1-\tau }}/{\gamma (1-\tau )}\). Thus, the convergence time for \({\tilde{E}}(t)\) is obtained as follows:

$$\begin{aligned} t_\text {c}<\max \left\{ \frac{2|{\tilde{e}}^-(0)|^{1-\tau }}{\gamma (1-\tau )},\frac{2|{\tilde{e}}^+(0)|^{1-\tau }}{\gamma (1-\tau )}\right\} . \end{aligned}$$

Note that \({\tilde{E}}(t)\) is the vectorization form of the error function E(t) depicted in (2). By summarizing the above analysis, the error function E(t) of the presented LFAZNN model (9) converges to zero within finite time \(t_\text {c}\), which corresponds to the finite-time convergence of X(t) [whose vectorization form is the first part of the state vector y(t) of (9)] to a time-varying theoretical solution of TVLMI (1). The proof is thus completed. \(\square \)
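As an illustrative numerical companion to the bound just derived (the helper name and the sample values below are hypothetical, not from the paper):

```python
def lfaznn_time_bound(e_plus0, e_minus0, gamma, tau):
    """Upper bound on the LFAZNN convergence time t_c from the proof above:
    t_c < max{2|e^-(0)|^(1-tau), 2|e^+(0)|^(1-tau)} / (gamma * (1 - tau))."""
    bound = lambda e0: 2.0 * abs(e0) ** (1.0 - tau) / (gamma * (1.0 - tau))
    return max(bound(e_plus0), bound(e_minus0))
```

For instance, with \(|{\tilde{e}}^+(0)|=2\), \(|{\tilde{e}}^-(0)|=1.5\), \(\gamma =5\) and \(\tau =0.4\), the bound is about 1.01; doubling \(\gamma \) halves it, consistent with the role of \(\gamma \) as a convergence-rate scaling parameter.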

4 Simulative Verifications

In the previous section, the LFAZNN model (9) and its finite-time convergence property have been presented and analyzed to solve TVLMI (1). In this section, simulation results with two illustrative examples are provided to substantiate the efficacy of the presented LFAZNN model (9) for TVLMI solving.

Example 1

In the first example, let us consider TVLMI (1) with smoothly time-varying coefficient matrices A(t) and B(t) being as follows:

$$\begin{aligned} A(t)= \begin{bmatrix} \sin (t) &{}\quad \cos (t)\\ -\cos (t) &{}\quad \sin (t) \end{bmatrix}\in R^{2\times 2}~~\text {and}~~ B(t)= \begin{bmatrix} \cos (2t) &{}\quad \sin (2t)+1\\ -\sin (2t)+1 &{}\quad -\cos (2t) \end{bmatrix}\in R^{2\times 2}. \end{aligned}$$

The simulation results synthesized by the presented LFAZNN model (9) are illustrated in Figs. 2, 3, 4 and 5.
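Since the figures themselves cannot be reproduced here, the following self-contained sketch (hypothetical NumPy code with crude forward-Euler integration, not the solver used by the authors) simulates LFAZNN model (9) on this example and checks the TVLMI at the final time:

```python
import numpy as np

def li(e, tau=0.2):
    """Li activation function, applied elementwise."""
    s = np.sign(e)
    return 0.5 * s * np.abs(e) ** tau + 0.5 * s * np.abs(e) ** (1.0 / tau)

# Coefficient matrices of Example 1 and their analytical time derivatives:
A  = lambda t: np.array([[ np.sin(t),  np.cos(t)], [-np.cos(t), np.sin(t)]])
Ad = lambda t: np.array([[ np.cos(t), -np.sin(t)], [ np.sin(t), np.cos(t)]])
B  = lambda t: np.array([[ np.cos(2*t),      np.sin(2*t) + 1],
                         [-np.sin(2*t) + 1, -np.cos(2*t)]])
Bd = lambda t: 2 * np.array([[-np.sin(2*t),  np.cos(2*t)],
                             [-np.cos(2*t),  np.sin(2*t)]])

vec = lambda Mt: Mt.flatten(order='F')      # column-stacking vec(.)
gamma, dt, T = 1.0, 1e-3, 4.0
rng = np.random.default_rng(2)
y = rng.uniform(-0.5, 0.5, 8)               # y = [vec(X); vec(Lambda)]

for k in range(int(T / dt)):                # simple forward-Euler integration
    t = k * dt
    M, Mdot = np.kron(np.eye(2), A(t)), np.kron(np.eye(2), Ad(t))
    D = np.diag(y[4:])                      # D(t) = diag(vec(Lambda))
    W, P = np.hstack((M, 2 * D)), np.hstack((M, D))
    rhs = -Mdot @ y[:4] + vec(Bd(t)) - gamma * li(P @ y - vec(B(t)))
    y = y + dt * np.linalg.pinv(W) @ rhs

X = y[:4].reshape(2, 2, order='F')
# After the finite convergence time, A(t) X(t) <= B(t) should hold elementwise
# (up to the integration error of the crude Euler scheme):
print(np.max(A(T) @ X - B(T)))
```

Note that A(t) here has unit determinant for all t, so the nonsingularity assumption of Sect. 2 is satisfied; the pseudoinverse \(W^{\dag }(t)\) is therefore well defined along the whole trajectory.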

Fig. 2
figure 2

Neural states of the presented LFAZNN model (9) with \(\tau =0.2\) and \(\gamma =1\) for TVLMI (1) solving

Fig. 3
figure 3

Profiles of residual error \(\Vert P(t)y(t)-w(t)\Vert _{2}\) and testing error function \(\varepsilon (t)=A(t)X(t)-B(t)\) synthesized by the presented LFAZNN model (9) with \(\tau =0.2\) and \(\gamma =1\) for TVLMI (1) solving

Fig. 4
figure 4

Simulation results synthesized by the presented LFAZNN model (9) with \(\tau =0.2\) and \(\gamma =10\) for TVLMI (1) solving

Specifically, Fig. 2 shows the neural states of the presented LFAZNN model (9) with \(\tau =0.2\) and \(\gamma =1\). As seen from Fig. 2, the trajectories of X(t) [being the first four elements of y(t) in (9)] and \(\varLambda (t)\) [being the rest of the elements of y(t)], starting from five randomly generated initial states, are time-varying. In addition, Fig. 3a shows the characteristics of the residual error \(\Vert P(t)y(t)-w(t)\Vert _{2}=\Vert A(t)X(t)-B(t)+\varLambda ^{2}(t)\Vert _{\text {F}}\) (with \(\Vert \cdot \Vert _{2}\) denoting the two-norm of a vector and \(\Vert \cdot \Vert _{\text {F}}\) denoting the Frobenius norm of a matrix), from which we can observe that the residual errors of the presented LFAZNN model (9) all converge to zero. This means that the solutions X(t) and \(\varLambda (t)\) shown in Fig. 2 are time-varying solutions to \(A(t)X(t)-B(t)+\varLambda ^{2}(t)=0\). Given that \(-\varLambda ^{2}(t)\leqslant 0\), we have \(A(t)X(t)=B(t)-\varLambda ^{2}(t)\leqslant B(t)\), showing that X(t) is an exact time-varying solution of TVLMI (1). For better understanding, Fig. 3b illustrates the profiles of the testing error function \(\varepsilon (t)=A(t)X(t)-B(t)\). As shown in Fig. 3b, all the elements of \(\varepsilon (t)\) are less than or equal to zero, i.e., \(A(t)X(t)-B(t)\leqslant 0\). Evidently, these simulation results substantiate the efficacy of the presented LFAZNN model (9) for TVLMI solving. Besides, it is worth pointing out that, as shown in Fig. 3a, instead of converging exponentially to zero, the residual errors of (9) decrease directly to zero within finite time. This result of finite-time convergence is quite different from the convergence results in the previous work [13, 17, 19].

By increasing the value of \(\gamma \) (i.e., from 1 to 10), the presented LFAZNN model (9) is simulated, and the related results are shown in Fig. 4. As observed from such a figure [especially Fig. 4b], the problem of TVLMI (1) is solved effectively via (9) in terms of the residual errors converging to zero rapidly. This means that the X(t) trajectories in Fig. 4a satisfy \(A(t)X(t)\leqslant B(t)\); i.e., they are solutions to TVLMI (1). These results substantiate again the efficacy of the presented LFAZNN model (9) for TVLMI solving. Furthermore, a comparison of Figs. 3a and 4b shows that the convergence performance of (9) improves as the value of \(\gamma \) increases. This result indicates that superior computational performance of the presented LFAZNN model (9) (including its finite-time convergence) can be achieved by increasing \(\gamma \).

Fig. 5
figure 5

Profiles of residual error \(\Vert P(t)y(t)-w(t)\Vert _{2}\) synthesized by the ZNN model (8) using different types of activation functions and the LFAZNN model (9) with different \(\gamma \) values for TVLMI (1) solving

Besides, for comparison and to provide an intuitive result on finite-time convergence, the ZNN model (8) using different types of activation functions (i.e., the linear, hyperbolic-sine, power-sigmoid, power-sum, and Li activation functions [13, 15,16,17,18,19,20]) is simulated to solve TVLMI (1). The corresponding results are shown in Fig. 5, where \(\tau =0.2\) for the LFAZNN model (9). Evidently, these results indicate that the TVLMI (1) is solved successfully via (8); i.e., the residual errors all converge to zero. In addition, the following results are observed and summarized from Fig. 5.

  • The convergence performance of the ZNN model (8) [including the LFAZNN model (9)] is improved effectively by increasing the value of \(\gamma \). Thus, the important role of \(\gamma \) is confirmed again.

  • The ZNN model (8) using the hyperbolic-sine, power-sigmoid and power-sum activation functions possesses better convergence than the one using the linear activation function.

  • The LFAZNN model (9) possesses a finite-time convergence, i.e., the corresponding residual error decreases directly to zero within finite time instead of converging to zero when \(t\rightarrow \infty \).

Based on these results, the LFAZNN model (9) is advantageous over the ZNN model (8) using the previously investigated activation functions [13, 15,16,17,18,19] because it possesses the property of finite-time convergence for TVLMI solving. Notably, using the Li activation function, ZNN models for solving other time-varying problems [21,22,23,24] still possess finite-time convergence. In addition, based on this activation function, more activation functions can be constructed to further improve the ZNN performance [27, 28]. This can be viewed as the generalizability of the Li activation function compared with those in [13, 15,16,17,18,19].

In summary, the above simulation results (i.e., Figs. 2, 3, 4 and 5) have substantiated the efficacy of the presented LFAZNN model (9) for online solution of TVLMI (1), and more importantly, its property of finite-time convergence (in terms of the residual error decreasing directly to zero within finite time).

Fig. 6
figure 6

Simulation results synthesized by the presented LFAZNN model (9) with \(\tau =0.4\) and \(\gamma =5\) for TVLMI (1) solving

Example 2

In the second example, we investigate how the value of parameter \(\tau \) affects the convergence performance of the presented LFAZNN model (9). The time-varying coefficient matrices A(t) and B(t) of TVLMI (1) in this example are designed as follows:

$$\begin{aligned} A(t)= & {} \begin{bmatrix} 3+\sin (3t) &{}\quad \cos (3t)/2 &{}\quad \cos (3t)\\ \cos (3t)/2 &{}\quad 3+\sin (3t) &{}\quad \cos (3t)/2\\ \cos (3t) &{}\quad \cos (3t)/2 &{}\quad 3+\sin (3t) \end{bmatrix}\in R^{3\times 3},\\ B(t)= & {} \begin{bmatrix} \sin (3t) &{}\quad \cos (3t)\\ \cos (4t)+1 &{}\quad \sin (4t)+1\\ \sin (5t)+\cos (5t) &{}\quad \sin (5t)\cos (5t) \end{bmatrix}\in R^{3\times 2}. \end{aligned}$$

The corresponding simulation results are illustrated in Figs. 6 and 7.

Specifically, as synthesized by the presented LFAZNN model (9) with \(\tau =0.4\) and \(\gamma =5\), Fig. 6a shows the trajectories of X(t) [being the first six elements of y(t) in (9)] that are time-varying. Figure 6b shows the characteristics of residual error \(\Vert P(t)y(t)-w(t)\Vert _{2}=\Vert A(t)X(t)-B(t)+\varLambda ^{2}(t)\Vert _{\text {F}}\), which indicates that the residual errors of (9) decrease directly to zero within finite time. These results substantiate the efficacy (as well as the finite-time convergence property) of the presented LFAZNN model (9) for TVLMI solving.

By changing the value of \(\tau \) (i.e., \(\tau =0.6\) and \(\tau =0.8\)), the presented LFAZNN model (9) is simulated to solve the aforementioned TVLMI. The related results are shown in Fig. 7, which indicate that (9) is effective at solving the TVLMI (1) in terms of the residual errors converging to zero rapidly and within finite time. The finite-time convergence property of (9) for TVLMI solving is thus substantiated once again. Comparing Figs. 6b and 7 further shows that the convergence times of (9) in these figures are about 0.7277, 0.8778, and 1.5071 s, respectively. This suggests that decreasing the value of \(\tau \) can enhance the finite-time convergence performance of the presented LFAZNN model (9). Therefore, \(\tau \) in (9) should be set appropriately small [20] to ensure that the residual error of (9) converges to zero in a short period of time.

Fig. 7
figure 7

Profiles of residual error \(\Vert P(t)y(t)-w(t)\Vert _{2}\) synthesized by the presented LFAZNN model (9) with \(\gamma =5\) and different values of \(\tau \) for TVLMI (1) solving

Remark

It follows from Figs. 3, 4, 5, 6 and 7 that the convergence time of the presented LFAZNN model (9) can be shortened by increasing the value of \(\gamma \), and can be further shortened by decreasing the value of \(\tau \). Thus, to achieve superior convergence for (9), parameter \(\gamma \) should be set appropriately large, while parameter \(\tau \) should be set appropriately small.

5 Conclusions

In this paper, by exploiting Li activation function, the LFAZNN model (9) has been presented and investigated for online solution of TVLMI (1). Then, theoretical results have been provided to show the excellent computational performance of the presented LFAZNN model (9). That is, such an LFAZNN model has the property of finite-time convergence on solving TVLMI. Comparative simulation results with two illustrative examples have been illustrated to further substantiate the efficacy of the presented LFAZNN model (9) for TVLMI solving, as well as its finite-time convergence property (in terms of the residual error decreasing directly to zero within finite time).