A new gradient-based neural network for solving linear and quadratic programming problems

Citation
Y. Leung et al., A new gradient-based neural network for solving linear and quadratic programming problems, IEEE NEURAL, 12(5), 2001, pp. 1074-1083
Citations number
25
Subject Categories
AI Robotics and Automatic Control
Journal title
IEEE TRANSACTIONS ON NEURAL NETWORKS
ISSN journal
1045-9227
Volume
12
Issue
5
Year of publication
2001
Pages
1074 - 1083
Database
ISI
SICI code
1045-9227(200109)12:5<1074:ANGNNF>2.0.ZU;2-3
Abstract
In this paper, a new gradient-based neural network is constructed on the basis of duality theory, optimization theory, convex analysis, Lyapunov stability theory, and the LaSalle invariance principle to solve linear and quadratic programming problems. In particular, a new function F(x, y) is introduced into the energy function E(x, y) such that E(x, y) is convex and differentiable, and the resulting network is more efficient. This network involves all the relevant necessary and sufficient optimality conditions for convex quadratic programming problems. For linear programming (LP) and quadratic programming (QP) problems with a unique solution or an infinite number of solutions, we have rigorously proven that, for any initial point, every trajectory of the neural network converges to an optimal solution of the QP and its dual problem. The proposed network differs from existing networks that use the penalty method or the Lagrange method, and the inequality (including nonnegativity) constraints are properly handled. The theory of the proposed network is rigorous and its performance is much better. Simulation results also show that the proposed neural network is feasible and efficient.
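The abstract does not reproduce the paper's network equations, but the general idea of a gradient-based (gradient-flow) network for QP can be sketched with a generic, hypothetical example: discretized projected-gradient dynamics on the energy of a small convex QP with nonnegativity constraints. The matrix Q, vector c, step size, and iteration count below are illustrative assumptions, not values from the paper.

```python
import numpy as np

# Illustrative sketch (NOT the paper's exact network):
# projected gradient-flow dynamics for the convex QP
#     minimize 0.5 * x'Qx + c'x   subject to x >= 0
# The discrete update x <- max(x - lr * grad, 0) is a forward-Euler
# step of the continuous dynamics with projection onto x >= 0.

Q = np.array([[2.0, 0.0],
              [0.0, 2.0]])   # positive definite -> convex QP (assumed data)
c = np.array([-2.0, -4.0])

def solve_qp(Q, c, lr=0.05, steps=2000):
    x = np.zeros(len(c))
    for _ in range(steps):
        grad = Q @ x + c                    # gradient of the energy
        x = np.maximum(x - lr * grad, 0.0)  # project onto x >= 0
    return x

x_star = solve_qp(Q, c)   # converges to [1, 2] for this data
```

For this problem the unconstrained optimum (Qx + c = 0 gives x = [1, 2]) already satisfies x >= 0, so the projected dynamics settle there; the paper's contribution is a network and energy function E(x, y) whose trajectories provably converge for general LP/QP, including the dual variables.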