Abstract
In recent years, a projection neural network was proposed for solving linear variational inequality (LVI) problems and related optimization problems, whose convergence to the optimal solution was guaranteed only under the monotonicity of the LVI. In this paper, we present a new result on the global exponential convergence of the projection neural network. Unlike existing convergence results for the projection neural network, our main result does not assume monotonicity of the LVI problem. Therefore, the projection neural network is further guaranteed to solve a class of non-monotone LVI and non-convex optimization problems. Numerical examples illustrate the effectiveness of the obtained result.
1 Introduction
Consider the following linear variational inequality (LVI) problem: find a vector \(x^{*}\in\Upomega\) such that

\((x - x^{*})^T (Mx^{*} + q) \geq 0, \quad \forall x \in \Upomega, \qquad (1)\)

where \(M \in {R^{n \times n}}\), \(q \in {R^n}\), and the feasible domain \(\Upomega=\{x \in R^n \mid l \leq x \leq h\}\). If M is a positive semidefinite or positive definite matrix, problem (1) is called a monotone or strictly monotone LVI, respectively.
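Since \(x^T M x = x^T \frac{M + M^T}{2} x\), only the symmetric part of M determines whether (1) is monotone, so the classification can be checked numerically. A minimal sketch (the helper name classify_lvi is ours):

```python
import numpy as np

def classify_lvi(M, tol=1e-10):
    """Classify LVI (1) via the definiteness of M.

    x^T M x = x^T ((M + M^T)/2) x, so only the symmetric
    part of M determines (strict) monotonicity.
    """
    eigs = np.linalg.eigvalsh((M + M.T) / 2.0)  # eigenvalues of symmetric part
    if eigs.min() > tol:
        return "strictly monotone: M is positive definite"
    if eigs.min() >= -tol:
        return "monotone: M is positive semidefinite"
    return "non-monotone: M is not positive semidefinite"
```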
LVI problems are of crucial importance in mathematical programming and have a wide variety of applications in science and engineering. Many constrained optimization problems, such as linear complementarity problems and linear-quadratic programming problems, can be equivalently converted into LVI problems [1, 2]. Various numerical algorithms for solving LVI problems, such as the descent method [3] and the projection-type method [4], have been studied for decades.
In recent years, recurrent neural networks have found successful applications in signal processing, pattern recognition, associative memory, and other engineering or scientific fields [5–10, 26–28]. Moreover, due to their inherently parallel and distributed information processing, recurrent neural networks are promising computational models for real-time applications. During the past decade, several neural networks have been proposed for solving LVI problems and related optimization problems [11–22]. Among them, the following projection neural network was developed and investigated for solving LVI (1) in [11–13]:

\(\frac{dx(t)}{dt} = \lambda \left\{P_{\Upomega}\left[x(t) - \alpha(Mx(t)+q)\right] - x(t)\right\} \qquad (2)\)
where λ > 0, α > 0 are two scaling factors and \(P_{\Upomega}: R^n\rightarrow \Upomega\) is a projection operator defined by \(P_{\Upomega}(x)=(P_{\Upomega}(x_1),\ldots,P_{\Upomega}(x_n))^T\) with

\(P_{\Upomega}(x_i)=\begin{cases} l_i, & x_i < l_i,\\ x_i, & l_i \leq x_i \leq h_i,\\ h_i, & x_i > h_i,\end{cases}\)

where \(l=[l_1,\ldots,l_n]^T\) and \(h=[h_1,\ldots,h_n]^T\). Note that \(x^*\) is a solution of problem (1) if and only if it is an equilibrium point of the above neural network.
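Because the box projection acts componentwise, \(P_{\Upomega}\) reduces to a clip operation, and the right-hand side of (2) can be prototyped directly. A minimal sketch, with the problem data passed in as arrays (the function names are ours):

```python
import numpy as np

def project_box(x, l, h):
    """Componentwise box projection P_Omega(x) onto {x : l <= x <= h}."""
    return np.clip(x, l, h)

def pnn_rhs(x, M, q, l, h, lam=1.0, alpha=1.0):
    """Right-hand side of (2): dx/dt = lam * (P_Omega[x - alpha(Mx + q)] - x)."""
    return lam * (project_box(x - alpha * (M @ x + q), l, h) - x)
```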
As was shown in [13], the projection neural network (2) features very low complexity and is superior to many other neural networks for solving LVI and related problems. The global convergence of the projection neural network has been well studied in [11–13]; however, it requires the restrictive condition that the LVI be monotone or strictly monotone, or equivalently, that M be positive semidefinite or positive definite. As is known, in many real-world applications the LVI problems are non-monotone and the related optimization problems are non-convex [23–25]. Therefore, relaxing the convergence condition of the projection neural network (2) is highly desirable for wide engineering applications, and several researchers have worked on this issue in recent years [14, 15, 17, 18]. For example, in [15], Hu et al. proved that the projection neural network can also solve pseudo-monotone LVI problems, pseudo-monotonicity being a weaker condition than monotonicity. More recently, by introducing time delays into the projection neural network model (2), several delayed neural networks [14, 17, 18] were proposed to solve the LVI (1), by which a class of non-monotone LVI can be solved under some assumptions. However, the introduction of time delays inevitably increases the complexity of the neural networks, which is a critical concern in hardware implementation.
Motivated by the above discussions, this paper presents a new global convergence condition for the projection neural network (2) to solve the LVI and related problems. By constructing a new Lyapunov–Krasovskii functional, a linear matrix inequality (LMI) approach is developed to establish a sufficient condition for the projection neural network (2) to be globally exponentially convergent to the optimal solution. With the proposed LMI approach, the monotonicity assumption on the LVI is no longer necessary. Therefore, the obtained result guarantees that the projection neural network (2) can also effectively solve a class of non-monotone LVI and non-convex optimization problems.
2 Preliminaries
For the sake of later discussion, a definition, two lemmas, and some notations are introduced in this section. In what follows, I denotes the identity matrix with suitable dimension; \(\lambda_{{\rm max}}(\cdot)\) and \(\lambda_{{\rm min}}(\cdot)\) represent the maximum and minimum eigenvalues of the given matrix, respectively; for a vector \(\phi \in R^n\), \(\parallel \phi \parallel\) denotes the vector \(l_2\)-norm; \(diag(\cdot)\) denotes a diagonal matrix formed from its vector argument.
Definition 1
The projection neural network (2) is said to be globally exponentially stable at the equilibrium point \(x^*\) if every trajectory x(t) of (2) with initial point \(x(t_0)\in R^n\) satisfies

\(\parallel x(t) - x^* \parallel \;\leq\; k_0 \parallel x(t_0) - x^* \parallel e^{-\eta (t - t_0)}, \quad \forall t \geq t_0, \qquad (3)\)

where \(k_0\) and η are positive constants independent of the initial point.
Lemma 1
[14] The projection operator \(P_{\Upomega}\) satisfies the following inequalities for any \(x, y \in R^n\):

\([x - y]^T \left[P_{\Upomega}(x) - P_{\Upomega}(y)\right] \geq \parallel P_{\Upomega}(x) - P_{\Upomega}(y) \parallel^2\)

and

\(\parallel P_{\Upomega}(x) - P_{\Upomega}(y) \parallel \;\leq\; \parallel x - y \parallel.\)
Lemma 2
[11] For each \(x(t_0)\in R^n,\) there exists a unique continuous solution x(t) for the projection neural network (2) over \(t\in[t_0, \infty)\).
3 Main results
In this section, we analyze the global stability and convergence of the projection neural network (2) and obtain a new condition for its global exponential convergence to the optimal solution of problem (1).
Firstly, we reformulate the projection neural network (2) as follows:

\(\frac{dx(t)}{dt} = \lambda \left\{P_{\Upomega}\left[Ax(t) + b\right] - x(t)\right\} \qquad (4)\)

where \(A = (a_{ij})_{n \times n} = I - \alpha M\) and \(b = (b_1, b_2, \ldots, b_n)^T = -\alpha q.\)
Then, let \(x^*\) be the equilibrium point of the neural network (4) and let \(a_i=[a_{i1}, a_{i2}, \ldots, a_{in}]\) denote the ith row of A. By the coordinate transformation \(u(t) = x(t) - x^*\), we get the following system from (4):

\(\frac{du(t)}{dt} = \lambda \left[f(Au(t)) - u(t)\right] \qquad (5)\)

where

\(f(Au(t)) = [f_1(a_1 u(t)), f_2(a_2 u(t)), \ldots, f_n(a_n u(t))]^T\)

and

\(f_i(a_i u(t)) = P_{\Upomega}\left[a_i (u(t) + x^*) + b_i\right] - P_{\Upomega}\left[a_i x^* + b_i\right], \quad i = 1, 2, \ldots, n.\)

Clearly, \(x^*\) is globally exponentially stable for system (4) if and only if the zero solution of system (5) is globally exponentially stable.
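The reformulation is a direct rearrangement, since \(x - \alpha(Mx + q) = (I - \alpha M)x - \alpha q = Ax + b\). A quick numerical sanity check on arbitrary data:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 3
M = rng.standard_normal((n, n))           # arbitrary, possibly indefinite M
q = rng.standard_normal(n)
l, h = -np.ones(n), np.ones(n)
alpha = 0.5

A, b = np.eye(n) - alpha * M, -alpha * q  # data of the reformulation (4)
x = rng.standard_normal(n)

lhs = np.clip(x - alpha * (M @ x + q), l, h)  # projection term of (2)
rhs = np.clip(A @ x + b, l, h)                # projection term of (4)
assert np.allclose(lhs, rhs)                  # (2) and (4) define the same flow
```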
Theorem 1
If there exist a positive definite symmetric matrix \(P\in R^{n\times n}\), positive definite diagonal matrices \(D = diag\{d_1, d_2, \ldots, d_n\}\) and \(L = diag\{l_1, l_2, \ldots, l_n\} \in R^{n\times n}\), and a constant k > 0 such that the following LMI (6) holds, then the equilibrium point of the neural network (5) is globally exponentially stable.
Proof
Construct the following Lyapunov–Krasovskii functional:
From the definition of \(P_{\Upomega}\), it is obvious that \(P_{\Upomega} [a_i (u(t) + x^* ) + b_i]\) is a non-decreasing function of its argument, so \(f_i(a_i u(t))\) is also non-decreasing. In addition, it is easy to see that \(f_i(0) = 0\) for \(i=1, 2, \ldots, n\). Therefore, if \(a_i u \geq 0\), then for \(0 \leq \eta \leq a_i u\), it can be obtained that

\(0 \leq \int_0^{a_i u} f_i(\eta)\,{\rm d}\eta \leq f_i(a_i u)\, a_i u,\)

and if \(a_i u < 0\), then for \(0 \geq \eta \geq a_i u\), we obtain

\(0 \leq \int_0^{a_i u} f_i(\eta)\,{\rm d}\eta \leq f_i(a_i u)\, a_i u.\)

Consequently, for any \(a_i u\), we can obtain that

\(0 \leq \int_0^{a_i u} f_i(\eta)\,{\rm d}\eta \leq a_i u\, f_i(a_i u).\)
Calculating the time derivative of V[u(t)] along the trajectory of (5) yields:
From Lemma 1, we can obtain that:
that is
For any positive definite diagonal matrix \(L = diag\{l_1, l_2, \ldots, l_n\}\), the following inequality holds:
Substituting (10) into (9), it yields:
Let \(\upsilon^T(t) = \left[u^T(t) \;\; f^T(Au(t))\right]\), then
where
Therefore, \(V(u(t)) \leq V(u(t_0))\) if (6) holds. Furthermore,
Meanwhile, it is easy to see that:
Therefore, from (12) and (13) we obtain,
where
By Lyapunov stability theory and Definition 1, the equilibrium point of neural network (5) is globally exponentially stable if (6) holds. This completes the proof.
Remark 1
Compared with the existing exponential convergence results in [11–13], Theorem 1 does not require M to be positive semidefinite or positive definite. This implies that the projection neural network (2) can solve a class of non-monotone LVI. Therefore, Theorem 1 extends the results obtained in [11–13]. Moreover, the condition in Theorem 1 is stated in LMI form, which can be checked easily by efficient LMI solvers.
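As an illustration of that solver workflow only (the specific blocks of LMI (6) are those of Theorem 1 and are not reproduced here), the following sketch checks feasibility of a standard Lyapunov-type LMI with CVXPY; the test matrix is hypothetical:

```python
import cvxpy as cp
import numpy as np

# Hypothetical test matrix; in practice one assembles the blocks of LMI (6)
# from A = I - alpha*M and the decision variables P, D, L and constant k.
A = np.array([[-1.0,  0.2,  0.0],
              [ 0.1, -1.5,  0.3],
              [ 0.0,  0.2, -0.8]])
n, k, eps = 3, 0.01, 1e-6

P = cp.Variable((n, n), symmetric=True)
constraints = [P >> eps * np.eye(n),                             # P > 0
               A.T @ P + P @ A + 2 * k * P << -eps * np.eye(n)]  # sample LMI
prob = cp.Problem(cp.Minimize(0), constraints)
prob.solve(solver=cp.SCS)
print(prob.status)  # 'optimal' indicates the LMI is feasible
```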
Remark 2
Based on the neural network model (2), delayed projection neural networks were developed in [14, 17, 18], which can solve a class of non-monotone LVI (1). However, Theorem 1 shows that the projection neural network (2) can solve a class of non-monotone LVI problems without introducing time delays. Moreover, the results obtained in [14, 17, 18] were derived under the assumption that I − αM is non-singular; in contrast, Theorem 1 has no such restriction.
4 Numerical simulations
In order to demonstrate the effectiveness of the projection neural network (2) for solving non-monotone LVI and non-convex optimization problems, two illustrative examples are presented in this section.
Example 1
[18] Consider the following third-order LVI problem, where
It is obvious that M is an asymmetric and indefinite matrix, so this is a non-monotone LVI problem; it has a unique solution \(x^* = [-0.3343, -0.5775, 2.5532]^T\). Thus, the results in [11–13] and [15] are not applicable to this example.
Let k = 0.01 and α = 1. Solving the LMI (6) yields the following feasible solution:
Therefore, Theorem 1 guarantees that the neural network (2) converges globally exponentially to the solution of LVI (14). Figure 1 displays the transient behavior of (2) with ten random initial points. The simulation results show that all trajectories converge to the optimal solution \(x^* = [-0.3343, -0.5775, 2.5532]^T\).
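A sketch of this simulation protocol (ten random initial points, forward integration of (2)); here M, q, l, h are assumed to hold the data of problem (14), which are not reproduced:

```python
import numpy as np
from scipy.integrate import solve_ivp

def simulate_pnn(M, q, l, h, lam=1.0, alpha=1.0, t_end=20.0, n_runs=10, seed=0):
    """Integrate (2) from random initial points; returns the final states."""
    rng = np.random.default_rng(seed)
    finals = []
    for _ in range(n_runs):
        x0 = rng.uniform(-5.0, 5.0, len(q))
        sol = solve_ivp(
            lambda t, x: lam * (np.clip(x - alpha * (M @ x + q), l, h) - x),
            (0.0, t_end), x0, rtol=1e-8, atol=1e-10)
        finals.append(sol.y[:, -1])
    return np.array(finals)  # rows should all agree with x* when Theorem 1 applies
```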
Next, we will show that the network (2) can be applied to solve a class of non-convex optimization problems.
Example 2
Consider the following quadratic programming problem:
where
This is a non-convex quadratic programming problem, and its unique optimal solution is \(x^* = [5, 5]^T\). Following a line of reasoning similar to [14], problem (15) can be solved by the projection neural network (2) by letting
Let k = 0.1 and α = 0.1. Solving the LMI (6) yields the following feasible solution:
According to Theorem 1, the neural network (2) is globally exponentially stable at its equilibrium point. Figure 2 displays the transient behavior of (2) with ten random initial points. The simulation results show that all trajectories converge to the optimal solution \(x^* = [x_1, x_2]^T = [5, 5]^T\).
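Assuming (15) is a box-constrained quadratic program \(\min \frac{1}{2}x^T W x + c^T x\) subject to \(l \leq x \leq h\) with symmetric W (the actual W and c are those of the example and are not reproduced here), its stationarity condition is exactly LVI (1) with M = W and q = c. A hypothetical instance with the same optimum \(x^* = [5, 5]^T\):

```python
import numpy as np

# Hypothetical data for illustration; the true W, c, l, h are those of (15).
W = np.array([[ 0.0, -1.0],
              [-1.0,  0.0]])        # indefinite Hessian -> non-convex QP
c = np.array([-1.0, -1.0])
l, h = np.zeros(2), 5.0 * np.ones(2)

M, q = W, c  # LVI data: stationarity of the QP over the box is exactly (1)
# For these data the unique box-constrained minimizer is x* = [5, 5]^T.
```

Passing these M, q, l, h to the simulate_pnn sketch above (with alpha = 0.1 as in the example) should drive all trajectories to \([5, 5]^T\).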
5 Conclusion
In this paper, we proposed a new result on the exponential convergence of a projection neural network, which was originally proposed for solving monotone LVI and convex optimization problems. Without the monotonicity assumption on the LVI, we proved that the projection neural network is still globally exponentially convergent to the optimal solution under mild conditions. Therefore, the application scope of the projection neural network was extended to a class of non-monotone LVI and non-convex optimization problems. Simulation results demonstrated the effectiveness of the obtained result.
References
Bertsekas DP (1989) Parallel and distributed computation: numerical methods. Prentice-Hall, Upper Saddle River, NJ
Facchinei F, Pang JS (2003) Finite dimensional variational inequalities and complementarity problems. Springer, New York
He BS, Yang ZH, Yuan XM (2004) An approximate proximal-extragradient type method for monotone variational inequalities. J Math Anal Appl 300:362–374
Solodov MV, Tseng P (1996) Modified projection-type methods for monotone variational inequalities. SIAM J Control Optim 34:1814–1830
Meng J, Wang X (2007) Robust anti-synchronization of a class of delayed chaotic neural networks. Chaos 17:023113
Meng J, Wang X (2008) Generalized projective synchronization of a class of delayed neural networks. Mod Phys Lett B 22:181–190
Lin D, Wang X, Wang L (2009) Controlling the multi-scroll chaotic attractors using fuzzy neural networks compensator. Chin J Phys 47:686–701
Wang X, Zhao J (2010) Cryptanalysis on a parallel keyed hash function based on chaotic neural network. Neurocomputing 73:3224–3228
Lin D, Wang X, Nian F, Zhang Y (2010) Dynamic fuzzy neural networks modeling and adaptive backstepping tracking control of uncertain chaotic systems. Neurocomputing 73:2873–2881
Lin D, Wang X (2011) Self-organizing adaptive fuzzy neural control for the synchronization of uncertain chaotic systems with random-varying parameters. Neurocomputing 74:2241–2249
Xia Y (2000) A recurrent neural network for solving linear projection equations. Neural Netw 13:337–350
Xia Y, Leung H, Wang J (2002) A projection neural network and its application to constrained optimization problems. IEEE Trans Circuits Syst-I Regul Pap 49:447–458
Xia Y (2004) Further results on global convergence and stability of globally projected dynamical systems. J Optim Theory Appl 122:627–649
Liu Q, Cao J, Xia J (2005) A delayed neural network for solving linear projection equations and its analysis. IEEE Trans Neural Netw 16:834–843
Hu X, Wang J (2006) Solving pseudomonotone variational inequalities and pseudo-convex optimization problems using the projection neural network. IEEE Trans Neural Netw 17:1487–1499
Hu X, Wang J (2007) Design of general projection neural networks for solving monotone linear variational inequalities and linear and quadratic optimization problems. IEEE Trans Syst Man Cybern Part B Cybern 37:1414–1421
Cheng L, Hou Z, Tan M (2009) Solving linear variational inequalities by projection neural network with time-varying delays. Phys Lett A 373:1739–1743
Cheng L, Hou Z, Tan M (2009) A delayed projection neural network for solving linear variational inequalities. IEEE Trans Neural Netw 20:915–925
Gao X, Liao L (2010) A new one-layer neural network for linear and quadratic programming. IEEE Trans Neural Netw 21:918–929
Liu Q, Cao J, Chen G (2010) A novel recurrent neural network with finite-time convergence for linear programming. Neural Comput 22:2962–2978
Liu Q, Wang J (2011) Finite-time convergent recurrent neural network with a hard-limiting activation function for constrained optimization with piecewise-linear objective functions. IEEE Trans Neural Netw 22:601–613
Liu Q, Guo Z, Wang Z (2012) A one-layer recurrent neural network for constrained pseudoconvex optimization and its application for dynamic portfolio optimization. Neural Netw 26:99–109
Tao Q, Liu X, Xue MS (2004) A dynamic genetic algorithm based on continuous neural networks for a kind of nonconvex optimization problems. Appl Math Comput 150:811–820
Bazaraa MS, Sherali HD, Shetty CM (1993) Nonlinear programming: theory and algorithms, 2nd edn. Wiley, New York
Friesz TL, Bernstein D, Mehta NJ, Tobin RL, Ganjalizadeh S (1994) Day-to-day dynamic network disequilibria and idealized traveler information systems. Oper Res 42:1120–1136
Zhang H, Quan Y (2001) Modeling, identification, and control of a class of nonlinear systems. IEEE Trans Fuzzy Syst 9:349–354
Zhang H, Wang Z, Liu D (2008) Global asymptotic stability of recurrent neural networks with multiple time-varying delays. IEEE Trans Neural Netw 19:855–873
Zhang H, Ma T, Huang G (2010) Robust global exponential synchronization of uncertain chaotic delayed neural networks via dual-stage impulsive control. IEEE Trans Syst Man Cybern Part B Cybern 40:831–844
Acknowledgments
This work was supported by the National Natural Science Foundation of China (50977008, 61034005), National Basic Research Program of China (2009CB320601), Science and Technology Research Program of The Education Department of Liaoning Province (LT2010040).