Abstract
Time-varying nonlinear optimization problems subject to various noises often arise in scientific and engineering research. Noise is unavoidable in practical workspaces, yet most existing models for time-varying nonlinear optimization problems assume that the computing process is free of noises. In this paper, within a control-theoretic framework, a noise-suppressing zeroing neural dynamic (NSZND) model is developed, analyzed and investigated by means of the continuous-time zeroing neural network model; it efficiently handles online time-varying nonlinear optimization problems in the presence of different noises. Further, to speed up convergence, a general noise-suppressing zeroing neural network (GNSZNN) model with different activation functions is discussed. Theoretical analyses then show that the proposed noise-suppressing zeroing neural network model derived from the NSZND model is globally convergent in the presence of different kinds of noises. Besides, how the GNSZNN model performs with different activation functions is also proved in detail. In addition, numerical results are provided to substantiate the feasibility and superiority of the GNSZNN model for online time-varying nonlinear optimization problems with inherent tolerance to noises.
1 Introduction
As an important branch of optimization, the time-varying nonlinear optimization problem is widely encountered in scientific computing and engineering applications, for example, quadratic programming [1], matrix square root estimation [2], convex optimization [3], nonlinear programming [4], bipedal walking robots [5, 6], and manipulators [7, 8]. Owing to its fundamental role, numerous algorithms have been developed and verified for computing nonlinear optimization problems [9,10,11]. Furthermore, due to the simplicity of its iterative framework and its low memory requirement, the nonlinear conjugate gradient method is considered an important optimization algorithm for solving large-scale optimization problems in scientific and engineering computation [4, 5, 9,10,11]. In [11], according to a new secant condition, Dai and Liao developed and investigated a novel conjugate gradient method for large-scale minimization problems, which did not necessarily construct a descent search direction. In [4], Abubakar and Kumam presented a descent conjugate gradient method based on the Dai-Liao descent condition and the hyperplane projection method for solving systems of nonlinear equations. By virtue of a subspace minimization technique, Andrei proposed a three-term conjugate gradient method for large-scale optimization, in which the search direction satisfied the Dai-Liao conjugacy condition [12]. Moreover, two modified three-term conjugate gradient methods were developed that satisfied both the descent condition and the Dai-Liao conjugacy condition for unconstrained optimization [13]. Following the Dai-Liao idea, Sun et al. [14] developed, analyzed and investigated two spectral conjugate gradient methods satisfying the sufficient descent property for unconstrained optimization problems. In fact, these methods can be considered modifications of the modified Broyden–Fletcher–Goldfarb–Shanno (MBFGS) method with different parameters.
Generally speaking, traditional numerical algorithms have been developed, analyzed and investigated for solving static nonlinear optimization problems, under the premise that the optimization problem does not change during the computational time. The computed solutions are therefore applied to the nonlinear optimization problem only after the calculation. However, such algorithms might not be feasible and effective in handling time-varying nonlinear optimization problems. The time-varying nonlinear optimization problem differs from the static one in that it changes with time: the objective function depends on time t, which evolves uniformly and unidirectionally. Thereby, the time derivative information is vital for obtaining the time-varying optimization solution of the time-varying nonlinear optimization problem.
Recently, owing to merits such as self-adaptation, distributed storage, and parallel processing, neural dynamics have been generalized to large-scale online nonlinear optimization problems [15,16,17]. For instance, a generalized repetitive motion planning scheme was presented with the limit of the position error being thoroughly analyzed [18]. A dynamical method based on gradient dynamics is usually exploited by defining an ordinary differential equation [19]; the solution of the optimization problem can then be seen as a stable equilibrium point of the ordinary differential equation system. Moreover, as a powerful tool for real-time time-varying nonlinear optimization, the zeroing neural network model was first proposed with superior feasibility and effectiveness [16, 20]. Furthermore, in order to realize potential digital hardware applications, Jin and Zhang [21] first proposed, investigated and developed a continuous/discrete-time zeroing neural network model and two gradient dynamics models for online time-varying nonlinear optimization. For instance, on the basis of zeroing neural networks, a discrete-time neural dynamics model able to solve the time-dependent nonlinear optimization problem in complex-valued form online was proposed with a global convergence characteristic and applied in robotics and filters [22]. To eliminate explicit matrix-inversion computation, a quasi-Newton Broyden–Fletcher–Goldfarb–Shanno (BFGS) method was generalized and analyzed, which can be considered an effective approximation of the inverse of the Hessian matrix [23]. Subsequently, in [8, 24], a neural dynamic distributed scheme based on the zeroing neural network was proposed and developed to realize the cooperative control of multiple redundant manipulators with exponentially convergent position errors.
Furthermore, the robustness of the zeroing neural network model with linear activation functions and power-sigmoid activation functions has been verified for online time-varying problems. In [25], Zhang et al. proposed a novel finite-time varying-parameter convergent-differential neural network with finite-time exponential convergence and strong robustness for solving nonlinear and nonconvex optimization problems. In [26], exploiting the monotonicity of the linear variational inequality, Huang et al. developed a new projection neural network for solving linear variational inequality problems and nonlinear optimization problems. In addition, for solving a complex-valued nonlinear convex programming problem, a complex-valued neural dynamical method was investigated, which was globally stable and convergent to the optimal solution [27]. Although previous neural network models have considered the time derivative information of the problem to be computed, noises were still not explicitly taken into account. Note that noises are unavoidable in the implementation of a practical system. Time-varying errors caused by external disturbances or parameter perturbations always exist and can be regarded as measurement noises. Therefore, it is worthwhile to develop a model for time-varying nonlinear optimization problems that is inherently tolerant to measurement noises.
The rest of this paper is organized into the following five sections. In Sect. 2, the NSZND model and the NSZNN model are described along with basic definitions, and the existing classical ZNN model is revisited. Furthermore, from the viewpoint of control, a control-theoretic framework is first designed and developed for time-varying nonlinear optimization problems. Theoretical analyses in Sect. 3 show that, apart from the time-varying vector-form error of the NSZND model, which has an exponential convergence performance, the vector-form time-varying state variable of the NSZNN model converges to the optimal solution in the presence of noises. In Sect. 4, to further investigate how three types of monotonically increasing odd activation functions speed up convergence, the general noise-suppressing zeroing neural network (GNSZNN) model is analyzed; the convergence performance of the GNSZNN model is verified through a classical Lyapunov theorem. Section 5 provides numerical experiments on time-varying nonlinear optimization, conducted with the proposed GNSZNN model as well as other existing models for comparison. In Sect. 6, the conclusion and future works are presented. Before ending this introductory section, the main contributions of this paper are summarized as follows:
1.
The continuous-time ZNN model for the online time-varying nonlinear optimization problem is redesigned from a control framework. Moreover, the NSZNN model is first proposed, analyzed and verified for the solution of the time-varying nonlinear optimization problem with inherent tolerance to noises. Furthermore, the time-varying vector-form state variable of the NSZNN model globally/exponentially converges to the time-varying optimization solution of the time-varying nonlinear optimization problem.
2.
Comparisons among the classical ZNN model, the gradient neural network model and the GNSZNN model for solving time-varying nonlinear optimization problems are given, which show the efficacy and superior performance of the GNSZNN model with inherent tolerance to noises.
2 Problem formulation and ZNN solution
In order to lay a basis for further investigation, the problem formulation and the design procedures of the continuous-time ZNN model are provided for solving the online time-varying nonlinear optimization problem in this section.
2.1 Problem formulation and definitions
In this subsection, we consider the time-varying nonlinear optimization problem, which is formulated as follows:
where \(f(\cdot ,\cdot ):{\mathbb {R}}^n\times [0,+\infty )\rightarrow {\mathbb {R}}\) denotes a twice-differentiable time-varying nonlinear mapping function and \({\mathbf {x}}(t)\in {\mathbb {R}}^n\) denotes a time-varying vector in real time t. The objective of this paper is to find an unknown time-varying optimization solution \({\mathbf {x}}(t)\in {\mathbb {R}}^n\) in real time t, which achieves the minimum value of time-varying nonlinear optimization problem (1) at each time instant. Assume that there always exists a time-varying optimization solution \({\mathbf {x}}^*(t)\in {\mathbb {R}}^n\) on any time interval \(t\in [t_0, t_f)\subseteq [0, +\infty )\).
To obtain the online solution of time-varying nonlinear optimization problem (1), the following definitions are given [21], provided that \(f(\cdot ,\cdot ):{\mathbb {R}}^n\times [0,+\infty )\rightarrow {\mathbb {R}}\) is a twice-differentiable time-varying nonlinear mapping function.
Definition 1
The gradient of \(f(\cdot ,\cdot )\) is defined as
where \(\left( \frac{\partial f}{\partial x_1},\frac{\partial f}{\partial x_2},\cdots ,\frac{\partial f}{\partial x_n}\right) ^{\mathrm{\top }}=\left( g_1({\mathbf {x}}(t),t),g_2({\mathbf {x}}(t),t),\cdots ,g_n({\mathbf {x}}(t),t)\right) ^\mathrm{\top }\in {\mathbb {R}}^{n}\) and the superscript \(^{\mathrm{{\top }}}\) denotes the transpose operator of a vector or matrix.
Definition 2
The time-varying set is defined as
for time instant \(t\in [0,+\infty )\).
Definition 3
The vector-valued error function is defined as
i.e.,
where \(e_j(t)=g_j({\mathbf {x}}(t),t)\) is the jth element of \({{\mathbf {e}}}(t)\), \(\forall j\in \{1,2,\cdots ,n\}\). To obtain the time-varying optimization solution \({\mathbf {x}}^*(t)\) of time-varying nonlinear optimization problem (1), \({{\mathbf {g}}}({\mathbf {x}}(t),t)\) should be zero.
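As a minimal numerical sketch of Definitions 1 and 3, consider the hypothetical objective \(f({\mathbf {x}},t)=(x_1-\sin t)^2+(x_2-\cos t)^2\) (an assumed example for illustration, not one from this paper). Its gradient serves directly as the error function \({{\mathbf {e}}}(t)\), which vanishes at the time-varying minimizer \({\mathbf {x}}^*(t)=(\sin t,\cos t)^{\top }\):

```python
import numpy as np

# Hypothetical example: f(x, t) = (x1 - sin t)^2 + (x2 - cos t)^2.
# The gradient g(x, t) from Definition 1 doubles as the error function e(t)
# of Definition 3.
def grad_f(x, t):
    return np.array([2.0 * (x[0] - np.sin(t)),
                     2.0 * (x[1] - np.cos(t))])

# At the time-varying minimizer x*(t) = (sin t, cos t), e(t) vanishes.
t = 1.3
x_star = np.array([np.sin(t), np.cos(t)])
print(np.allclose(grad_f(x_star, t), 0.0))  # True
```

Driving this gradient to zero at every time instant is exactly the goal of the zeroing dynamics developed next.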
2.2 ZNN model and gradient neural network model
The ZNN model developed in this subsection combines the error information and the time derivative information, thereby achieving strong robustness and high accuracy in the online solution of time-varying nonlinear optimization problem (1). To monitor and control the solution process of problem (1) via zeroing \({{\mathbf {g}}}({\mathbf {x}}(t),t)\), following Zhang et al.'s design approach [28], we define the zeroing dynamic as follows:
i.e.,
with design parameter \(\gamma >0\), where \({\dot{{\mathbf {e}}}}(t)\) denotes the time derivative of the vector-valued error function and \(\dot{{{\mathbf {g}}}}_t({\mathbf {x}}(t),t)\) denotes the partial time derivative of the gradient function. In fact, if the vector-valued error function \({{\mathbf {e}}}(t)\) approaches zero, the time-varying solution \({\mathbf {x}}(t)\) converges to the time-varying optimization solution \({\mathbf {x}}^*(t)\). Expanding zeroing dynamic (6) yields the following differential equation
where \(H({\mathbf {x}}(t),t)\) denotes the Hessian matrix and \(\dot{{\mathbf {g}}}_t({\mathbf {x}}(t),t)\) denotes the partial time derivative vector. Their details are
and
Since the Hessian matrix \(H({\mathbf {x}}(t),t)\) considered in this paper is nonsingular, the above zeroing dynamic (6) can be rewritten as the classic ZNN model [21]
\({\mathbf {x}}(t)\) represents the state of ZNN model (10) corresponding to \({\mathbf {x}}^*(t)\in {\mathbb {R}}^{n}\) of time-varying nonlinear optimization problem (1), which starts from a randomly generated initial vector \({\mathbf {x}}(0)\in {\mathbb {R}}^{n}\). In addition, for the convenience of comparison, the gradient neural network model [19], called GD-1 for short, is described as follows:
where \(\varepsilon =\Vert \partial f({\mathbf {x}})/\partial {\mathbf {x}} \Vert _2^2/2\in {\mathbb {R}}\) is defined as a norm-based energy function. Besides, another GD model, termed GD-2 model, is formulated with \(\varepsilon = f({\mathbf {x}})\) as follows:
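As an illustrative sketch (not the paper's experiment), ZNN model (10) can be simulated by Euler integration on a hypothetical objective \(f({\mathbf {x}},t)=(x_1-\sin t)^2+(x_2-\cos t)^2\), whose minimizer is \((\sin t,\cos t)^{\top }\); the constant Hessian \(2I\) and the partial time derivative of the gradient used below follow from this assumed objective:

```python
import numpy as np

# Sketch of classical ZNN model (10): dx/dt = -H^{-1}(gamma*g + g_t),
# on the assumed objective f(x, t) = (x1 - sin t)^2 + (x2 - cos t)^2.
# For this f: H = 2*I and g_t = (-2 cos t, 2 sin t).
def znn_solve(gamma=10.0, dt=1e-3, T=5.0):
    x = np.array([2.0, -1.0])          # an arbitrary initial state x(0)
    H_inv = np.eye(2) / 2.0            # inverse Hessian of the assumed f
    for k in range(int(T / dt)):
        t = k * dt
        g = np.array([2*(x[0] - np.sin(t)), 2*(x[1] - np.cos(t))])
        g_t = np.array([-2*np.cos(t), 2*np.sin(t)])
        x = x + dt * (-H_inv @ (gamma * g + g_t))   # Euler step of (10)
    return x

# Residual distance to the time-varying minimizer at t = T:
print(np.linalg.norm(znn_solve() - np.array([np.sin(5.0), np.cos(5.0)])))
```

Because the error obeys \(\dot{{\mathbf {e}}}=-\gamma {{\mathbf {e}}}\) along these trajectories, the residual shrinks roughly like \(\exp (-\gamma t)\), up to the Euler discretization error.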
To overcome the sensitivity of the ZNN model to noises, the NSZND model is proposed, analyzed and investigated in the next subsection.
2.3 NSZND model and NSZNN model
To solve time-varying nonlinear optimization problem (1) efficiently in real time and robustly in spite of measurement noises, noise-suppressing zeroing neural dynamic (NSZND) model is defined as
where \(\gamma >0\) and \(\lambda >0\) are two positive scalars. In fact, NSZND model (13) can be regarded as a second-order linear system. Specifically, let \(\epsilon (t)=\int _0^t{{{\mathbf {e}}}}(\tau )\text {d}\tau \) and let \({\dot{\epsilon }}(t), \ddot{\epsilon }(t)\) be the first and second derivatives of \(\epsilon (t)\), respectively. Thereby, NSZND model (13) can be described as a second-order linear system,
By designing the parameters \(\gamma \) and \(\lambda \) of (14), the characteristic roots can be placed in the left half-plane, which guarantees the convergence of the second-order linear system. In addition, NSZND model (13) provides a feedback mechanism driving the vector-valued error function \({{\mathbf {e}}}(t)\) to zero: the left side of (13) denotes the changing rate of \({{\mathbf {e}}}(t)\), and the right side of (13) includes the negative feedback term \(-\gamma {{\mathbf {e}}}(t)\) and the integral penalty term \(-\lambda \int _0^t{{{\mathbf {e}}}}(\tau )\text {d}\tau \), which together dynamically drive the vector-valued error function \({{\mathbf {e}}}(t)\) to zero. By expanding NSZND model (13), the following noise-suppressing zeroing neural network (NSZNN) model is generalized as
For further investigation on the robustness of NSZNN model (15) under the disturbance of unknown measurement noises, the following noise-polluted NSZNN model can be obtained as
where \(\eta (t)\in {\mathbb {R}}^n\) means the vector-form measurement noises.
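A scalar sketch (with assumed, illustrative parameter and noise values) of the noise-polluted error dynamics \(\dot{e}=-\gamma e-\lambda \int _0^t e\,\text {d}\tau +\eta \) shows the noise-suppressing role of the integral term:

```python
import numpy as np

# Scalar Euler simulation of the NSZND error dynamics (13) with an injected
# constant noise eta. Parameters gamma, lam, eta are illustrative assumptions.
def nsznd_error(gamma=10.0, lam=25.0, eta=5.0, dt=1e-3, T=8.0):
    e, integral = 1.0, 0.0
    for _ in range(int(T / dt)):
        de = -gamma * e - lam * integral + eta   # right side of (13) plus noise
        integral += e * dt                        # accumulate int_0^t e dtau
        e += de * dt
    return e

print(abs(nsznd_error()))            # near zero despite the constant noise
print(abs(nsznd_error(lam=0.0)))     # without the integral term: biased by eta/gamma
```

With \(\lambda >0\) the constant noise is absorbed by the integral state and the error still decays to zero; setting \(\lambda =0\) recovers the plain ZNN-type dynamics, whose error settles at the bias \(\eta /\gamma \) instead.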
Remark 1
Assuming that \(H({\mathbf {x}}(t),t)\) is a positive definite Hessian matrix, NSZNN model (15) derived from NSZND model (13) generates a unique solution, which corresponds to the exact solution of time-varying nonlinear optimization problem (1).
2.4 A control-theoretic approach for the time-varying nonlinear optimization problem
From the viewpoint of control, the vector-form error function \({{\mathbf {e}}}(t)\) can be regarded as a measurable distance between the time-varying state variable \({{\mathbf {x}}}(t)\) and the time-varying optimization solution \({{\mathbf {x}}}^*(t)\). In other words, if the time-varying state variable \({{\mathbf {x}}}(t)\) of (7) converges to the theoretical time-varying optimization solution \({{\mathbf {x}}}^*(t)\) as \(t\rightarrow \infty \), the vector-form error function \({{\mathbf {e}}}(t)\) is driven to zero. Therefore, the goal of this paper is to construct a nonlinear control system composed of the state variable \({{\mathbf {x}}}(t)\), a controller input \({\text {u}}(t)\), and the error output function \({{\mathbf {e}}}(t)\). The controller \({\text {u}}(t)\) is supposed to drive \({{\mathbf {e}}}(t)\) to zero according to a suitable control law, which is demonstrated in the following control system.
That is, as \({{\mathbf {e}}}(t)\) approaches zero, the state variable \({{\mathbf {x}}}(t)\) converges to the time-varying optimization solution \({{\mathbf {x}}}^*(t)\). Thereby, the time-varying nonlinear optimization problem (1) can be transformed into a nonlinear control problem.
Furthermore, NSZNN model (15) can be reconsidered as a generalized proportional-integral-derivative (PID) controller from the control-theoretic viewpoint. To this end, as shown in Fig. 1, \(\dot{{\mathbf {g}}}_t({\mathbf {x}}(t),t)\) denotes the derivative part, and \(\lambda \int _0^t{{\mathbf {g}}}({\mathbf {x}}(\tau ),\tau )\text {d}\tau \) signifies the integral part, respectively, where \(\varPsi (\cdot )\) represents the activation function that will be discussed in the next section.
Having constructed the control system (17), the convergence performance of vector-form error function \({{\mathbf {e}}}(t)\) and NSZND model (13) will be discussed in the following section as well as how noise-polluted NSZNN model (16) carries out in the presence of measurement noises.
3 Theoretical analyses and results
Following on from the previous formulation, this section outlines theoretical analyses of the convergence characteristic of NSZNN model (16) under different kinds of measurement noises.
3.1 Convergence of NSZND model and NSZNN model
For classical ZNN model (10), designed for the time-varying nonlinear optimization problem without noises, it has been verified that the time-varying state variable \({{\mathbf {x}}}(t)\) globally converges to the exact solution \({{\mathbf {x}}}^*(t)\) [21]. For solving time-varying nonlinear optimization problem (1) with different noises, NSZNN model (15) is an equivalent expansion of NSZND model (13); thorough theoretical analyses are deduced as follows.
Theorem 1
Consider time-varying nonlinear optimization problem (1). Assume that Hessian matrix \(H({\mathbf {x}}(t),t)\) is positive definite. Then the time-varying state variable \({\mathbf {x}}(t)\in {\mathbb {R}}^{n}\) of NSZNN model (15), starting from a randomly generated initial state \({\mathbf {x}}(0)\in {\mathbb {R}}^{n}\), converges to the optimal solution to time-varying nonlinear optimization problem (1) as \(t\rightarrow +\infty \).
Proof
The Lyapunov function candidate can be chosen as
where \({\hat{g}}_{j}({\mathbf {x}}(t),t)\) is the jth element of \(\int _0^t{{\mathbf {g}}}({\mathbf {x}}(\tau ),\tau )\text {d}\tau \). This guarantees the positive definiteness of Lyapunov function \(V_1({\mathbf {x}}(t),t)\); that is, \(V_1({\mathbf {x}}(t),t)>0\) for any \(g_{j}({\mathbf {x}}(t),t)\ne 0\), and \({V_1}({\mathbf {x}}(t),t)=0\) only when every element satisfies \(g_{j}({\mathbf {x}}(t),t)={\hat{g}}_{j}({\mathbf {x}}(t),t)=0\), with \(j\in \{1,2,\cdots ,n\}\). Considering Eq. (13), the time derivative of \({V}_1({\mathbf {x}}(t),t)\) along the element trajectories of NSZNN model (15) becomes
In light of the Lyapunov theory in [24], since the Lyapunov function candidate (18) is positive definite and its time derivative (19) is negative definite, the time-varying state variable \({\mathbf {x}}(t)\in {\mathbb {R}}^{n}\) of NSZNN model (15) converges to the time-varying optimal solution \({\mathbf {x}}^*(t)\in {\mathbb {R}}^{n}\) of time-varying nonlinear optimization problem (1) as \(t\rightarrow +\infty \). The proof is thus completed. \(\square \)
In order to further develop analytical results on time-varying nonlinear optimization problem (1) solved by NSZNN model (15), the following theorem is established.
Theorem 2
The error-monitoring function \({{\mathbf {e}}}(t)\) of NSZND model (13) for online time-varying nonlinear optimization problem (1) globally and exponentially converges to zero.
Proof
Let \(\epsilon (t)=\int _0^t{{{\mathbf {e}}}}(\tau )\text {d}\tau \) and let \({\dot{\epsilon }}(t), \ddot{\epsilon }(t)\) be the first and second derivatives of \(\epsilon (t)\), respectively. Then, NSZND model (13) can be written as the following second-order linear system,
Moreover, the characteristic values of second-order linear system (20) can be computed as
and
Furthermore, the initial conditions of formula (20) are \(\epsilon (0)=0\) and \({\dot{\epsilon }}(0)={{\mathbf {e}}}(0)\). Therefore, the theoretical solutions of second-order linear system (20) fall into the following three cases.
1.
If \(\gamma ^2>4\lambda \), two distinct characteristic values with \(\chi _1\ne \chi _2\) can be obtained. Furthermore, as both \(\chi _1\) and \(\chi _2\) are real numbers, we can obtain
$$\begin{aligned} \epsilon _i(t)=\frac{e_i(0)[\exp (\chi _1t)-\exp (\chi _2t)]}{\sqrt{\gamma ^2-4\lambda }} \end{aligned}$$
(23)
and
$$\begin{aligned} e_i(t)=\frac{e_i(0)[\chi _1\exp (\chi _1t)-\chi _2\exp (\chi _2t)]}{\sqrt{\gamma ^2-4\lambda }}. \end{aligned}$$
(24)
Moreover, the vector-form error can be described as
$$\begin{aligned} {{\mathbf {e}}}(t)=\frac{{{\mathbf {e}}}(0)[\chi _1\exp (\chi _1t)-\chi _2 \exp (\chi _2t)]}{\sqrt{\gamma ^2-4\lambda }}. \end{aligned}$$
(25)
2.
If \(\gamma ^2=4\lambda \), two equal characteristic values with \(\chi _1=\chi _2\) can be computed. Thus, the following equation can be obtained:
$$\begin{aligned} \epsilon _i(t)=e_i(0)t\exp (\chi _1t). \end{aligned}$$
(26)
In addition, the vector-form error can be obtained as
$$\begin{aligned} {{\mathbf {e}}}(t)={{\mathbf {e}}}(0)\exp (\chi _1t)+\chi _1t{{\mathbf {e}}}(0)\exp (\chi _1t), \end{aligned}$$
(27)
where \(\chi _1=\chi _2=-\gamma /2\).
3.
If \(\gamma ^2<4\lambda \), the characteristic values form a complex-conjugate pair, \(\chi _1=\alpha +\text {i}\beta \) and \(\chi _2=\alpha -\text {i}\beta \). Therefore, the following equations can be obtained:
$$\begin{aligned} \epsilon _i(t)=\frac{e_i(0)\sin (\beta {t})\exp (\alpha {t})}{\beta } \end{aligned}$$
(28)
and
$$\begin{aligned} {{\mathbf {e}}}(t)={{\mathbf {e}}}(0)\exp (\alpha {t})\left[ \frac{\alpha \sin (\beta {t})}{\beta }+\cos (\beta {t})\right] , \end{aligned}$$
(29)
where \(\alpha =-\gamma /2\) and \(\beta =\sqrt{4\lambda -\gamma ^2}/{2}.\)
Summarizing the above analyses of the three cases together with the rigorous proof of Theorem 1 in [29], it can be inferred that the error-monitoring function \({{\mathbf {e}}}(t)\) of NSZND model (13) for online time-varying nonlinear optimization problem (1) globally and exponentially converges to zero. The proof is thus completed. \(\square \)
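The three cases above depend only on the roots of the characteristic polynomial \(\chi ^2+\gamma \chi +\lambda =0\), which always lie in the left half-plane for \(\gamma ,\lambda >0\). A quick numerical check, with parameter values chosen here purely to realize each case:

```python
import numpy as np

# Roots of chi^2 + gamma*chi + lambda = 0 for the three cases of Theorem 2.
def char_roots(gamma, lam):
    return np.roots([1.0, gamma, lam])

for gamma, lam in [(6.0, 5.0),    # gamma^2 > 4*lambda: two distinct real roots
                   (6.0, 9.0),    # gamma^2 = 4*lambda: repeated root -gamma/2
                   (2.0, 5.0)]:   # gamma^2 < 4*lambda: conjugate pair, Re = -gamma/2
    roots = char_roots(gamma, lam)
    print(roots, all(r.real < 0 for r in roots))
```

In every case the real parts equal \(-\gamma /2\) plus/minus a real offset bounded by \(\gamma /2\), so all roots are strictly stable, consistent with the exponential convergence claimed by the theorem.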
Remark 2
The time-varying state variable \({\mathbf {x}}(t)\) of NSZNN model (15) exponentially converges to the exact solution to time-varying nonlinear optimization problem (1) with the first n elements constituting the time-varying optimal solution.
3.2 The convergence property of NSZNN model with the presence of different measurement noises
In this subsection, to analyze the performance of NSZNN model (15) in dealing with different measurement noises, categorized into constant noises, linear time-varying noises, and random noises, for time-varying nonlinear optimization problem (1), the following three significant theorems are given.
Theorem 3
The solution generated by noise-polluted NSZNN model (16) globally converges to the time-varying optimal solution to (7) no matter how large the unknown vector-form constant noise \(\eta (t)={\tilde{\eta }}\in {\mathbb {R}}^n\) is. Furthermore, the state vector constituted by the first n elements of \({{\mathbf {x}}}(t)\) globally converges to the time-varying optimization solution to time-varying nonlinear optimization problem (1).
Proof
With the aid of the Laplace transformation [30], the ith subsystem of noise-polluted NSZNN model (16) leads to
It can be directly computed that
thereby, the transfer function can be obtained as \( \kappa /(\kappa ^2+\kappa \gamma +\lambda )\). Furthermore, the poles of the transfer function are \(\kappa _1=(-\gamma +\sqrt{\gamma ^2-4\lambda })/2\) and \(\kappa _2=(-\gamma -\sqrt{\gamma ^2-4\lambda })/2\). Since \(\gamma , \lambda >0\), both poles lie in the left half-plane, which indicates that the system is stable. Moreover, because the noise is constant, its Laplace transform is \(\eta _i(\kappa )={\tilde{\eta }}_i/\kappa \), where \(i=1,2,\cdots ,n\). Applying the final value theorem [30] to (31), the following equation can be obtained
Therefore, \(\lim _{t\rightarrow \infty }\Vert {{\mathbf {e}}}(t)\Vert _2=0\). The proof is thus completed. \(\square \)
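The final-value step of this proof can be checked symbolically. The sketch below assumes the scalar transfer function \(\kappa /(\kappa ^2+\gamma \kappa +\lambda )\) from the proof and a constant noise with Laplace transform \({\tilde{\eta }}/\kappa \):

```python
import sympy as sp

# Symbolic check of the final-value argument in Theorem 3:
# lim_{t->inf} e_i(t) = lim_{kappa->0} kappa * e_i(kappa), where
# e_i(kappa) = [kappa/(kappa^2 + gamma*kappa + lam)] * eta(kappa).
kappa, gamma, lam, eta_t = sp.symbols('kappa gamma lam eta_t', positive=True)
E = kappa / (kappa**2 + gamma*kappa + lam) * (eta_t / kappa)  # constant noise
steady_state = sp.limit(kappa * E, kappa, 0)
print(steady_state)  # 0
```

The limit is exactly zero for any noise magnitude, mirroring the theorem's claim that constant noises are completely suppressed.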
Theorem 4
With linear noise \(\eta (t)={\hat{\eta }}t\in {\mathbb {R}}^n\) disturbing, the time-varying state vector \({{\mathbf {x}}}(t)\) of noise-polluted NSZNN model (16) globally converges to the time-varying optimal solution \({{\mathbf {x}}}^*(t)\) to time-varying nonlinear optimization problem (1), where the upper bound of the vector-form error \(\lim _{t\rightarrow \infty }\Vert {{\mathbf {e}}}(t)\Vert _2\) is \(\Vert {\hat{\eta }}\Vert _2/\lambda \), which approaches zero as \(\lambda \) approaches positive infinity.
Proof
As \(\eta (t)={\hat{\eta }}t\) denotes the linear noise, applying the Laplace transformation [30] to the ith subsystem of formula (16) yields
where \({\hat{\eta }}_i/\kappa ^2\) is the Laplace transform of \({\hat{\eta }}_it\) and \(i=1,2,\cdots ,n\). According to the final value theorem [30], the previous equation can be generalized as
Thereby, it can be obtained that if \(\lambda \rightarrow \infty \), \(\lim _{t\rightarrow \infty }\Vert {{\mathbf {e}}}(t)\Vert _2={\Vert {\hat{\eta }}\Vert _2}/{\lambda }\rightarrow 0\). The proof is thus completed. \(\square \)
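A scalar numerical check of this bound (with illustrative, assumed parameter values): under a linear noise \({\hat{\eta }}t\), the residual error of the noise-polluted dynamics settles at \({\hat{\eta }}/\lambda \) rather than at zero.

```python
import numpy as np

# Euler simulation of the scalar noise-polluted error dynamics of (16):
#   de/dt = -gamma*e - lam * int_0^t e dtau + eta_hat * t.
# Theorem 4 predicts the steady-state error eta_hat/lam.
def residual_under_linear_noise(gamma=10.0, lam=50.0, eta_hat=2.0,
                                dt=1e-4, T=10.0):
    e, integral = 0.0, 0.0
    for k in range(int(T / dt)):
        t = k * dt
        de = -gamma * e - lam * integral + eta_hat * t
        integral += e * dt
        e += de * dt
    return e

print(residual_under_linear_noise())  # close to eta_hat/lam = 0.04
```

Enlarging \(\lambda \) shrinks this residual toward zero, matching the \(\Vert {\hat{\eta }}\Vert _2/\lambda \) bound of the theorem.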
Theorem 5
Constrained by the bounded vector-form random noise \(\eta (t)=\theta (t)\in {\mathbb {R}}^n\), the supremum of the vector-form error \(\Vert {{\mathbf {e}}}(t)\Vert _2\) generated by noise-polluted NSZNN model (16) for solving time-varying nonlinear optimization problem (1) is \(2\varpi \frac{\sqrt{n}}{\sqrt{\gamma ^2-4\lambda }}\) for \(\gamma ^2>4\lambda \), or \(4\lambda \varpi \frac{\sqrt{n}}{\gamma \sqrt{4\lambda -\gamma ^2}}\) for \(\gamma ^2<4\lambda \), where \(\varpi =\max _{1\le {i}\le {n}}\{\max _{0\le \tau \le {t}}|\theta _i(\tau )|\}\) and \(\theta _i(t)\) denotes the ith element of \(\theta (t)\); the supremum can be made arbitrarily small by choosing \(\gamma \) sufficiently large and \(\lambda \) appropriately.
Proof
NSZND model (13) polluted by noise can be rewritten as
where \(\gamma>0\in {\mathbb {R}}, \lambda >0\in {\mathbb {R}}\) and \(\theta (t)\in {\mathbb {R}}^n\) denotes the random noise. The ith subsystem of (35) can be described as
where \(i=1,2,\cdots ,n\). According to the values of parameters \(\gamma \) and \(\lambda \), the theoretical analyses fall into the following three cases.
1.
If \(\gamma ^2>4\lambda \), the ith subsystem of the time-varying optimization solution to (36) can be generalized as follows
$$\begin{aligned} e_i(t)= & {} \frac{e_i(0)[\mu _1\exp (\mu _1t)-\mu _2\exp (\mu _2t)]}{\mu _1-\mu _2} \\&+\left\{ \int _0^t[\mu _1\exp (\mu _1(t-\tau )) \right. \\&\left. -\,\mu _2\exp (\mu _2(t-\tau ))]\theta _i(\tau )\text {d}\tau \right\} \frac{1}{\mu _1-\mu _2}, \end{aligned}$$
(37)
where \(\mu _1=(-\gamma +\sqrt{\gamma ^2-4\lambda })/2\) and \(\mu _2=(-\gamma -\sqrt{\gamma ^2-4\lambda })/2\). Furthermore, owing to the triangle inequality, the following inequality can be obtained:
$$\begin{aligned} |e_i(t)|\le & {} \frac{|e_i(0)[\mu _1\exp (\mu _1t)-\mu _2\exp (\mu _2t)]|}{\mu _1-\mu _2} \\&+\left\{ \int _0^t|\mu _1\exp (\mu _1(t-\tau ))\right. \\&\left. -\,\mu _2\exp (\mu _2(t-\tau ))|\max _{0\le \tau \le t}|\theta _i(\tau )|\text {d}\tau \right\} \frac{1}{\mu _1-\mu _2}. \end{aligned}$$
(38)
Therefore, the following supremum of the vector-form error can be described as
$$\begin{aligned} \lim _{t\rightarrow \infty }\sup \Vert {{\mathbf {e}}}(t)\Vert _2\le 2\varpi \frac{\sqrt{n}}{\sqrt{\gamma ^2-4\lambda }}, \end{aligned}$$
(39)
where \(\varpi =\max _{1\le {i}\le {n}}\{\max _{0\le \tau \le {t}}|\theta _i(\tau )|\}\).
2.
If \(\gamma ^2=4\lambda \), the ith subsystem of the time-varying optimization solution to (36) can be computed as follows:
$$\begin{aligned} e_i(t)=e_i(0)\exp (\mu _1t)(t\mu _1+1)+\int _0^t[(t-\tau )\mu _1+1]\exp [\mu _1(t-\tau )] \theta _i(\tau )\text {d}\tau , \end{aligned}$$
(40)
where \(\mu _1=\mu _2=-\gamma /2\). Owing to the proof of Theorem 1 in [29], there exist \(\sigma _1>0\in {\mathbb {R}}\) and \(\sigma _2>0\in {\mathbb {R}}\) such that
$$\begin{aligned} |\mu _1|t\exp (\mu _1t)\le \sigma _1\exp (-\sigma _2t). \end{aligned}$$
(41)
According to the triangle inequality, the following inequality can be generalized as
$$\begin{aligned} |e_i(t)|\le & {} |e_i(0)\exp (\mu _1t)(t\mu _1+1)| \\&+\left( \frac{\sigma _1}{\sigma _2} -\frac{1}{\mu _1}\right) \max _{0\le \tau \le t}|\theta _i(\tau )|. \end{aligned}$$
(42)
Thereby, the following supremum of the vector-form error can be described as
$$\begin{aligned} \lim _{t\rightarrow \infty }\sup \Vert {{\mathbf {e}}}(t)\Vert _2\le \left( \frac{\sigma _1}{\sigma _2}-\frac{1}{\mu _1}\right) \varpi \sqrt{n}, \end{aligned}$$
(43)
where \(\varpi =\max _{1\le {i}\le {n}}\{\max _{0\le \tau \le {t}}|\theta _i(\tau )|\}\).
3.
If \(\gamma ^2<4\lambda \), the ith subsystem of the time-varying optimization solution to (36) can be described as follows
$$\begin{aligned} e_i(t)= & {} e_i(0)\exp (\delta _1t)\frac{\delta _1\sin (\delta _2t)}{\delta _2} \\&+\int _0^t\frac{\delta _1\sin (\delta _2(t-\tau ))\exp (\delta _1(t-\tau ))}{\delta _2}\theta _i(\tau )\text {d}\tau \\&+e_i(0)\exp (\delta _1t)\cos (\delta _2t)+\int _0^t\cos (\delta _2(t-\tau )) \exp (\delta _1(t-\tau ))\theta _i(\tau )\text {d}\tau , \end{aligned}$$
(44)
where \(\delta _1=-\gamma /2\) and \(\delta _2=\sqrt{4\lambda -\gamma ^2}/2\). Applying the triangle inequality to the previous equation, it can be computed that
Moreover,
In addition, the following supremum of vector-form error can be described as
where \(\varpi =\max _{1\le {i}\le {n}}\{\max _{0\le \tau \le {t}}|\theta _i(\tau )|\}\). The proof is thus completed. \(\square \)
4 GNSZNN model
To improve the efficiency of the proposed NSZND model (13) and NSZNN model (15), this section discusses how activation functions can be incorporated into them.
4.1 Formulation of GNSZNN model and different activation functions
To further investigate the performance of different activation functions, NSZND model (13) is extended to a general form for time-varying nonlinear optimization problem (1) as follows.
where \(\varPsi : {\mathbb {R}}^n\rightarrow {\mathbb {R}}^n\) denotes an activation function array, each element of which is a monotonically increasing odd function, and \(\gamma >0\) and \(\lambda >0\) are two positive design parameters. Generally speaking, activation functions can be applied to accelerate the convergence of ZNN model (10) [31, 32]. Therefore, three types of monotonically increasing odd activation functions are considered here.
1. Bi-exponential activation function [24]:
$$\begin{aligned} \varPsi _i(e_i)=\exp (\varsigma e_i)-\exp (-\varsigma e_i), \end{aligned}$$ (49)
with parameter \(\varsigma =3\).
2. Power-sigmoid activation function [24]:
$$\begin{aligned} \varPsi _i(e_i)=\left\{ \begin{array}{ll} e_i^p, &{} {\text {if}}\,|e_i|\ge 1,\\ \frac{1+\exp (-\xi )}{1-\exp (-\xi )}\frac{1-\exp (-\xi e_i)}{1+\exp (-\xi e_i)},&{} {\text {otherwise,}} \end{array}\right. \end{aligned}$$ (50)
with parameters \(\xi =4\) and \(p=3\).
3. Power-sum activation function [24]:
$$\begin{aligned} \varPsi _i(e_i)=\sum _{k=1}^N{e_i}^{2k-1}, \end{aligned}$$ (51)
with integer parameter \(N=3\).
In fact, NSZNN model (15) is a special case of general model (48) when the activation function in equation (48) is a linear activation function, i.e., \(\varPsi _i(e_i)=e_i\).
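For reference, the three activation functions (49)-(51), with the parameter values stated above, can be transcribed in Python as follows (an illustrative sketch, not the authors' code):

```python
import math

def bi_exponential(e, varsigma=3.0):
    # Eq. (49): exp(varsigma*e) - exp(-varsigma*e)
    return math.exp(varsigma * e) - math.exp(-varsigma * e)

def power_sigmoid(e, p=3, xi=4.0):
    # Eq. (50): power branch for |e| >= 1, rescaled sigmoid otherwise
    if abs(e) >= 1:
        return e ** p
    scale = (1 + math.exp(-xi)) / (1 - math.exp(-xi))
    return scale * (1 - math.exp(-xi * e)) / (1 + math.exp(-xi * e))

def power_sum(e, N=3):
    # Eq. (51): sum of odd powers e^1 + e^3 + ... + e^(2N-1)
    return sum(e ** (2 * k - 1) for k in range(1, N + 1))
```

Each function is odd and monotonically increasing, which is exactly the property the convergence analysis below relies on; for instance, `power_sum(2.0)` evaluates to \(2+8+32=42\).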
4.2 Convergent property of GNSZNN model
Based on Eq. (48), the general noise-suppressing zeroing neural network (GNSZNN) model for the time-varying nonlinear optimization problem (1) can be described as follows:
To further analyze and investigate the robustness of GNSZNN model (52) under different measurement noises, the noise-polluted GNSZNN model is given as follows:
where \(\eta (t)\in {\mathbb {R}}^n\) denotes the vector-based measurement noises, which include constant noises, linear time-varying noises and random noises.
Theorem 6
Consider time-varying nonlinear optimization problem (1). Assume that the Hessian matrix \(H({\mathbf {x}}(t),t)\) is positive definite. Then the time-varying state variable \({\mathbf {x}}(t)\in {\mathbb {R}}^{n}\) of noise-polluted GNSZNN model (53), where \(\varPsi : {\mathbb {R}}^n\rightarrow {\mathbb {R}}^n\) is an activation function array whose elements are monotonically increasing odd functions, starting from a randomly generated initial state \({\mathbf {x}}(0)\in {\mathbb {R}}^{n}\), converges to the time-varying optimal solution to time-varying nonlinear optimization problem (1) as \(t\rightarrow +\infty \).
Proof
The Lyapunov function candidate can be obtained as
where \({\tilde{g}}_{j}({\mathbf {x}}(t),t)\) is the jth element of \(\int _0^t{{\mathbf {g}}}({\mathbf {x}}(\tau ),\tau )\text {d}\tau \). This guarantees the positive definiteness of the Lyapunov function candidate \(V_2({\mathbf {x}}(t),t)\), that is, \(V_2({\mathbf {x}}(t),t)>0\) for any \(g_{j}({\mathbf {x}}(t),t)\ne 0\), and \({V}_2({\mathbf {x}}(t),t)=0\) only for \(g_{j}({\mathbf {x}}(t),t)=0\) and \({\tilde{g}}_{j}({\mathbf {x}}(t),t)=0\) with \(j\in \{1,2,\ldots ,n\}\). Considering Eq. (48), the time derivative \({\dot{V}}_2({\mathbf {x}}(t),t)\) along the trajectories of noise-polluted GNSZNN model (53) becomes
Since each element of the activation function array \(\varPsi \) is a monotonically increasing odd function, the following inequality can be obtained
Therefore, using the Lyapunov theory [24], it can be concluded that the generally modified time-varying zeroing dynamic \({\mathbf {e}}(t)\) (48) globally converges to zero. That is, the time-varying state vector \({\mathbf {x}}(t)\) of noise-polluted GNSZNN model (53) globally converges to the theoretical solution to time-varying nonlinear optimization problem (1) as \(t\rightarrow +\infty \). The proof is thus completed. \(\square \)
To verify the feasibility and efficiency of the proposed models, illustrative examples are reported in the next section.
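The key step of the proof is that \(e_i\varPsi _i(e_i)\ge 0\) for any monotonically increasing odd \(\varPsi _i\), so the activation terms in \({\dot{V}}_2\) are non-positive. This can be spot-checked numerically with a sample odd increasing function, here \(\psi (e)=e^3+e\) (chosen purely for illustration):

```python
# Spot-check (illustrative only): for a monotonically increasing odd psi,
# e * psi(e) >= 0 for all e, with equality only at e = 0. Here
# psi(e) = e^3 + e, so e*psi(e) = e^4 + e^2 >= 0 identically.
def psi(e):
    return e ** 3 + e

grid = [-2.0 + 0.1 * i for i in range(41)]  # sample points in [-2, 2]
nonneg = all(e * psi(e) >= 0.0 for e in grid)
print(nonneg)  # True
```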
5 Numerical simulations
In this section, some numerical examples are considered to verify the superiority and efficacy of GNSZNN model (52) for solving the online time-varying nonlinear optimization problem (1). The simulations run in MATLAB version 2017b on a Microsoft Windows 7 Professional operating system with a 3.20-GHz Intel(R) Core(TM) i5-6500 central processing unit and 4.0-GB memory.
5.1 GNSZNN model for solving online time-varying nonlinear optimization problem
To compare the convergence of the proposed GNSZNN model (52) with GD-1 model (11), GD-2 model (12) and ZNN model (10) when solving the online time-varying nonlinear optimization problem with different measurement noises, the following numerical example is considered
where \(\eta (t)\) denotes different measurement noises, i.e., constant noises, linear time-varying noises and random noises. Moreover, the aforementioned models are directly coded and simulated using the MATLAB routine "ODE45" in this paper [33].
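As a hedged illustration of such a simulation (the paper's experiments use ODE45 on example (57); here a hypothetical scalar problem \(f(x,t)=(x-\sin t)^2/2\) is integrated by forward Euler in Python instead), the noise-suppressing dynamics track the time-varying minimizer \(x^*(t)=\sin t\) even under a constant noise:

```python
import math

# Illustrative scalar stand-in (NOT the paper's example (57)):
# minimize f(x,t) = (x - sin t)^2 / 2, so the gradient error is
# e(t) = g(x,t) = x - sin t with Hessian H = 1 and dg/dt = -cos t.
# The noise-suppressing dynamics with constant noise eta read
#   x'(t) = -gamma*e - lam * integral_0^t e ds + cos t + eta.
def track_minimum(gamma=10.0, lam=50.0, eta=2.0, dt=1e-4, T=8.0):
    x, integral, t = 0.0, 0.0, 0.0
    for _ in range(int(T / dt)):
        e = x - math.sin(t)
        x += dt * (-gamma * e - lam * integral + math.cos(t) + eta)
        integral += dt * e
        t += dt
    return abs(x - math.sin(t))  # tracking error at the final time
```

Despite the constant noise \(\eta =2\), the final tracking error is negligible, mirroring the behavior reported for GNSZNN model (52) in Fig. 2.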
In Fig. 2, the noise of time-varying nonlinear minimization problem (57) is constant, and the parameters of the aforementioned neural network models are \(\gamma =10\) and \(\lambda =50\). The time-varying state variable \({\mathbf {x}}(t)\) starts from an initial state vector \({\mathbf {x}}(0)=[0,\,4,\,-8,\,-6]^{\top }\). Particularly, elementary solutions of time-varying state variable \({\mathbf {x}}(t)\) of GNSZNN model (52) using the linear activation function array \(\varPsi _i(e_i)=e_i\) are demonstrated in Fig. 2a, which indicates that GNSZNN model (52) is feasible and effective for solving online time-varying nonlinear minimization problem (57) with constant noises. The logarithmic time-varying residual error \(\log (\Vert {{\mathbf {e}}}(t)\Vert _{\mathrm{{2}}})\) during the computing process is shown in Fig. 2b, where the time-varying residual error of GNSZNN model (52) rapidly converges, but those of the comparative models do not. Furthermore, Fig. 2b illustrates that the convergence rate of GNSZNN model (52) is highly superior to those of the other models. Therefore, the proposed GNSZNN model (52) is suitable for solving the real-time nonlinear optimization problem. Moreover, the minimal eigenvalue of Hessian matrix \(H({\mathbf {x}}(t),t)\) is always larger than zero during the computing process, as shown in Fig. 2c. That is to say, the Hessian matrix \(H({\mathbf {x}}(t),t)\) is not a singular matrix during the time interval [0, 10] s, which means that the time-varying nonlinear minimization problem (57) can be solved by GNSZNN model (52). In addition, Fig. 2d shows comparative results of \(f({\mathbf {x}}(t),t)\) computed by GNSZNN model (52), GD-1 model (11), GD-2 model (12) and ZNN model (10), where the minimization function \(f({\mathbf {x}}(t),t)\) computed by GNSZNN model (52) is smaller than those generated by the other methods. This reveals that the time-varying state variable \({\mathbf {x}}(t)\) of GNSZNN model (52) achieves the time-varying minimal value of the objective function in time-varying nonlinear minimization problem (57) in real time.
In Fig. 3, under the linear time-varying noise \(\eta (t)=t\), the parameters of GNSZNN model (52) are set as \(\gamma =10\) and \(\lambda =50\). As shown in Fig. 3a, GNSZNN model (52) is feasible and effective for solving online time-varying nonlinear minimization problem (57) with linear time-varying noises. To be specific, as seen from Fig. 3b, the logarithmic time-varying residual error \(\log (\Vert {{\mathbf {e}}}(t)\Vert _{\mathrm{{2}}})\) of GNSZNN model (52) rapidly converges to zero within 2 s, differing from those of GD-1 model (11), GD-2 model (12) and ZNN model (10), which all diverge. In addition, the minimal eigenvalue of Hessian matrix \(H({\mathbf {x}}(t),t)\) is also larger than zero during the solving process, as shown in Fig. 3c. Figure 3d shows that the minimum solution \(f({\mathbf {x}}(t),t)\) computed by the proposed GNSZNN model (52) is much smaller than those generated by the other methods. Therefore, we can naturally conclude that the time-varying state variable \({\mathbf {x}}(t)\) of GNSZNN model (52) converges to the time-varying minimum solution of time-varying nonlinear minimization problem (57) in the presence of linear time-varying noises.
As shown in Fig. 4a, it is evident that GNSZNN model (52) can efficiently deal with random noises when solving online time-varying nonlinear minimization problem (57). Moreover, as seen from Fig. 4b, the logarithmic time-varying residual error \(\log (\Vert {{\mathbf {e}}}(t)\Vert _{\mathrm{{2}}})\) of GNSZNN model (52) drops to a tiny value, whereas the other models, including GD-1 model (11), GD-2 model (12) and ZNN model (10), fail to generate accurate solutions. Similarly, the minimal eigenvalue of Hessian matrix \(H({\mathbf {x}}(t),t)\) proves the applicability of GNSZNN model (52). Besides, Fig. 4d represents the comparison of objective function \(f({\mathbf {x}}(t),t)\) generated by all the comparative models for time-varying nonlinear minimization problem (57).
Therefore, from Figs. 2, 3 and 4, it can be seen that the proposed GNSZNN model (52) of this paper is more efficient and superior to the other classical approaches for time-varying nonlinear minimization problem (57) with different measurement noises.
5.2 GNSZNN model with different parameters and activation functions for online continuous-time nonlinear minimization problem
In this subsection, the following continuous-time nonlinear minimization problem with different measurement noises is considered as a more complicated case, which is generated from equation (1) in [34].
where \(\eta (t)\) denotes different measurement noises, i.e., constant noises, linear time-varying noises and random noises. Different parameters and activation functions are utilized to investigate the efficiency and superiority of GNSZNN model (52) for online continuous-time nonlinear minimization problem (58) with different noises. In this paper, the dimension of (58) is \(n=10\). The corresponding numerical results, synthesized by GNSZNN model (52) starting with initial state \({\mathbf {x}}(0)=[0, 0, \cdots , 0]^\top \in {\mathbb {R}}^{10}\), are shown in Figs. 5, 6 and 7. As shown in Fig. 5a, different parameters \(\lambda \) are selected as \(\lambda =50, 100, 200, 500, 1000\). It can be seen that the larger the parameter \(\lambda \) is, the faster the logarithmic time-varying residual error \(\log (\Vert {{\mathbf {e}}}(t)\Vert _{\mathrm{{2}}})\) converges. Therefore, the global/exponential convergence of GNSZNN model (52) with different parameters is verified through simulations. In addition, the convergence rate can be manually tuned by adjusting the parameter \(\lambda \); to accelerate it, \(\lambda \) should be chosen as a sufficiently large number. As shown in Fig. 5b, it is also demonstrated that the larger the parameter \(\lambda \) is, the smaller the optimal value of objective function \(f({\mathbf {x}}(t),t)\) is.
In Fig. 6a, parameters \(\gamma \) are adopted as \(\gamma =50, 100, 200, 500, 1000\). It can be seen that the larger the parameter \(\gamma \) is, the faster the logarithmic time-varying residual error \(\log (\Vert {{\mathbf {e}}}(t)\Vert _{\mathrm{{2}}})\) converges. Therefore, the global convergence of GNSZNN model (52) with \(\gamma \) changing is investigated via numerical simulations. In addition, the convergence rate can be accelerated by adopting a larger \(\gamma \). As shown in Fig. 6b, it is demonstrated that the larger the parameter \(\gamma \) is, the smaller the optimal value of objective function \(f({\mathbf {x}}(t),t)\) is. Overall, the convergence rate of GNSZNN model (52) can be manually set by simultaneously adjusting the parameters \(\lambda \) and \(\gamma \).
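The effect of \(\lambda \) on the residual can be illustrated with a scalar caricature of the error dynamics (a hedged sketch, not the paper's example (58)): under the ramp noise \(\eta (t)=t\), the steady-state residual of \({\dot{e}}(t)=-\gamma e-\lambda \int _0^t e\,\text {d}s+\eta (t)\) is approximately \(1/\lambda \), so a larger \(\lambda \) leaves a smaller residual.

```python
# Scalar illustration: under ramp noise eta(t) = t, the integral-enhanced
# error dynamics settle at a residual of about 1/lam, so increasing lam
# shrinks the steady-state error (consistent with Fig. 5/6 trends).
def residual_under_ramp(lam, gamma=10.0, dt=1e-4, T=6.0):
    e, integral, t = 0.0, 0.0, 0.0
    for _ in range(int(T / dt)):
        e += dt * (-gamma * e - lam * integral + t)  # eta(t) = t
        integral += dt * e
        t += dt
    return abs(e)  # approximately 1/lam after the transient decays
```

For example, \(\lambda =50\) leaves a residual near \(0.02\), while \(\lambda =500\) leaves one near \(0.002\).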
The plots in Fig. 7a reveal that the logarithmic time-varying residual error \(\log (\Vert {\mathbf {e}}(t)\Vert )\) of GNSZNN model (52) with the linear activation function is slightly larger than those with the other activation functions; the bi-exponential and power-sigmoid activation function arrays achieve relatively better performance for solving online time-varying nonlinear minimization problem (58). In addition, Fig. 7b shows the comparison of \(f({\mathbf {x}}(t),t)\) generated by GNSZNN model (52) with different activation functions, where the value obtained with the bi-exponential activation function is smaller than those obtained with the other activation functions.
6 Conclusions and future works
In this paper, from a control viewpoint, GNSZNN model (52) has been developed with the aid of classical ZNN model (10) for online time-varying nonlinear optimization problems, exhibiting robust performance in noisy workspaces and high time-varying accuracy; its global/exponential convergence property has been thoroughly proved. In addition, collaborating with different monotonically increasing odd activation functions, the convergence of GNSZNN model (52) has been accelerated. Besides, simulative and numerical results further illustrate the efficacy and advantages of GNSZNN model (52) for time-varying nonlinear optimization problems.
Furthermore, GNSZNN model (52) may open a door to performance improvements in related applications, such as redundant manipulators [35, 36], rehabilitation robots [37,38,39], trajectory planning [40, 41] and time-varying problems [42], owing to its great capacity for tolerating noises and its computing accuracy. Moreover, since the Hessian matrix involved in GNSZNN model (52) is required to be invertible during the online solution process of the time-varying nonlinear optimization problem, one of our future research directions is the investigation of new models that handle a singular Hessian matrix.
References
Yang Y, Zhang Y (2013) Superior robustness of power-sum activation functions in Zhang neural networks for time-varying quadratic programs perturbed with large implementation errors. Neural Comput Appl 22:175–185
Zhang Y, Yang Y, Cai B, Guo D (2012) Zhang neural network and its application to Newton iteration for matrix square root estimation. Neural Comput Appl 21:453–460
Andrei N (2018) An adaptive scaled BFGS method for unconstrained optimization. Numer Algorithms 77(2):413–432
Abubakar AB, Kumam P (2019) A descent Dai-Liao conjugate gradient method for nonlinear equations. Numer Algorithms 81(1):197–210
Sun ZB, Tian YT, Wang J (2018) A novel projected Fletcher–Reeves conjugate gradient approach for finite-time optimal robust controller of linear constraints optimization problem: application to bipedal walking robots. Optim Control Appl Methods 39(1):130–159
Sun ZB, Sun YY, Li Y, Liu KP (2019) A new trust region-sequential quadratic programming approach for nonlinear systems based on nonlinear model predictive control. Eng Optim 51(6):1071–1096
Jin L, Li S, La H, Zhang X, Hu B (2019) Dynamic task allocation in multi-robot coordination for moving target tracking: a distributed approach. Automatica 100:75–81
Jin L, Li S, Luo X, Li Y, Qin B (2018) Neural dynamics for cooperative control of redundant robot manipulators. IEEE Trans Ind Inform 14:3812–3821
Livieris IE, Tampakas V, Pintelas P (2018) A descent hybrid conjugate gradient method based on the memoryless BFGS update. Numer Algorithms 79(4):1169–1185
Andrei N (2018) A Dai-Liao conjugate gradient algorithm with clustering of eigenvalues. Numer Algorithms 77(4):1273–1282
Dai YH, Liao LZ (2001) New conjugacy conditions and related nonlinear conjugate gradient methods. Appl Math Optim 43(1):87–101
Andrei N (2013) On three-term conjugate gradient algorithms for unconstrained optimization. Appl Math Comput 219:6316–6327
Liu JK, Li SJ (2014) New three-term conjugate gradient method with guaranteed global convergence. Int J Comput Math 91(8):1744–1754
Sun ZB, Li HY, Wang J, Tian YT (2018) Two modified spectral conjugate gradient methods and their global convergence for unconstrained optimization. Int J Comput Math 95(10):2082–2099
Huang XJ, Cui BT (2018) A neural dynamic system for solving convex nonlinear optimization problems with hybrid constraints. Neural Comput Appl 31:6027–6038. https://doi.org/10.1007/s00521-018-3422-4
Jin L, Zhang YN, Qiu BB (2018) Neural network-based discrete-time Z-type model of high accuracy in noisy environments for solving dynamic system of linear equations. Neural Comput Appl 29:1217–1232
Li S, Cui H, Li Y, Liu B, Lou Y (2013) Decentralized control of collaborative redundant manipulators with partial command coverage via locally connected recurrent neural networks. Neural Comput Appl 23:1051–1060
Xie Z, Jin L, Du X, Xiao X, Li H, Li S (2019) On generalized RMP scheme for redundant robot manipulators aided with dynamic neural networks and nonconvex bound constraints. IEEE Trans Ind Inform 15:5172–5181. https://doi.org/10.1109/TII.2019.2899909
Liao L, Qi H, Qi L (2004) Neurodynamical optimization. J Global Optim 28(2):175–195
Jin L, Li S (2017) Nonconvex function activated zeroing neural network models for dynamic quadratic programming subject to equality and inequality constraints. Neurocomputing 267:107–113
Jin L, Zhang YN (2016) Continuous and discrete Zhang dynamics for real-time varying nonlinear optimization. Numer Algorithms 73(1):115–140
Qi YM, Jin L, Wang YN, Xiao L, Zhang JL (2019) Complex-valued discrete-time neural dynamics for perturbed time-dependent complex quadratic programming with applications. IEEE Trans Neural Netw Learn Syst. https://doi.org/10.1109/TNNLS.2019.2944992
Wei L, Jin L, Yang CG, Chen K, Li WB (2019) New noise-tolerant neural algorithms for future dynamic nonlinear optimization with estimation on Hessian matrix inversion. IEEE Trans Syst Man Cybern Syst. https://doi.org/10.1109/TSMC.2019.2916892
Jin L, Zhang YN, Li S, Zhang YY (2016) Modified ZNN for time-varying quadratic programming with inherent tolerance to noises and its application to kinematic redundancy resolution of robot manipulators. IEEE Trans Ind Electron 63(11):6978–6988
Zhang Z, Zheng L, Li L, Deng X, Xiao L, Huang G (2018) A new finite-time varying-parameter convergent-differential neural-network for solving nonlinear and nonconvex optimization problems. Neurocomputing. https://doi.org/10.1016/j.neucom.2018.07.005
Huang B, Hui G, Gong D, Wang ZS, Meng XP (2014) A projection neural network with mixed delays for solving linear variational inequality. Neurocomputing 125(11):28–32
Zhang S, Xia Y, Zheng W (2015) A complex-valued neural dynamical optimization approach and its stability analysis. Neural Netw 61:59–67
Zhang Y, Guo D (2015) Zhang functions and various models. Springer, Berlin
Zhang Z, Zhang YN (2013) Design and experimentation of acceleration-level drift-free scheme aided by two recurrent neural networks. IET Control Theory Appl. 7:25–42
Oppenheim AV, Willsky AS (1997) Signals and systems. Prentice-Hall, Englewood Cliffs
Zhang Y, Li Z (2009) Zhang neural network for online solution of time-varying convex quadratic program subject to time-varying linear-equality constraints. Phys Lett A 373(18–19):1639–1643
Zhang Y, Yi C (2011) Zhang neural networks and neural-dynamic method. Nova Science Publishers, Hauppauge
Mathews JH, Fink KD (2005) Numerical methods using MATLAB. Prentice-Hall Inc, Englewood Cliffs
Martínez JM, Prudente LF (2012) Handling infeasibility in a large-scale nonlinear optimization algorithm. Numer Algorithms 60(2):263–277
Jin L, Li S, Xiao L, Lu RB, Liao BL (2018) Cooperative motion generation in a distributed network of redundant robot manipulators with noises. IEEE Trans Syst Man Cybern Syst 48(10):1715–1724
Jin L, Zhang YN (2015) Discrete-time Zhang neural network for online time-varying nonlinear optimization with application to manipulator motion generation. IEEE Trans Neural Netw Learn Syst 26(7):1525–1531
Zhang J, Fiers P, Witte KA et al (2017) Human-in-the-loop optimization of exoskeleton assistance during walking. Science 356:1280–1284
Rifaï H, Mohammed S, Djouani K, Amirat Y (2017) Toward lower limbs functional rehabilitation through a knee-joint exoskeleton. IEEE Trans Control Syst Technol 25:712–719
Wang WQ, Hou ZG, Cheng L, Tong LN, Peng L, Tan M (2016) Toward patients’ motion intention recognition: dynamics modeling and identification of iLeg—an LLRR under motion constraints. IEEE Trans Syst Man Cybern Syst 46:980–992
Shen P, Zhang X, Fang Y (2018) Complete and time-optimal path-constrained trajectory planning with torque and velocity constraints: theory and applications. IEEE/ASME Trans Mech 23:735–746
Zhang X, Chen X, Farzadpour F, Fang Y (2018) A visual distance approach for multi-camera deployment with coverage optimization. IEEE/ASME Trans Mech 23:1007–1018
Sun ZB, Li F, Zhang BC, Sun YY, Jin L (2019) Different modified zeroing neural dynamics with inherent tolerance to noises for time-varying reciprocal problems: a control-theoretic approach. Neurocomputing 337:165–179
Funding
The work is supported in part by the National Natural Science Foundation of China under Grants 61873304, 11701209 and 51875047, and also in part by the China Postdoctoral Science Foundation Funded Project under Grant 2018M641784, 2019T120240 and also in part by the Key Science and Technology Projects of Jilin Province, China, Grant Nos. 20190302025GX, 20170204067GX, and 20180201105GX and also in part by the Industrial Innovation Special Funds Project of Jilin Province, China, Grant No. 2018C038-2 and also in part by the Jilin Engineering Laboratory for Intelligence Robot and Visual Measurement Technology, Grant No. 2019C010 and also in part by the Fundamental Research Funds for the Central Universities (No. lzujbky-2019-89).
Ethics declarations
Conflict of interest
The authors declare that there is no conflict of interests regarding the publication of this paper.
Cite this article
Sun, Z., Shi, T., Wei, L. et al. Noise-suppressing zeroing neural network for online solving time-varying nonlinear optimization problem: a control-based approach. Neural Comput & Applic 32, 11505–11520 (2020). https://doi.org/10.1007/s00521-019-04639-2