Stability analysis of gradient-based neural networks for optimization problems

Citation
Q.M. Han et al., Stability analysis of gradient-based neural networks for optimization problems, J GLOB OPT, 19(4), 2001, pp. 363-381
Citations number
34
Subject Categories
Engineering Mathematics
Journal title
JOURNAL OF GLOBAL OPTIMIZATION
ISSN journal
0925-5001
Volume
19
Issue
4
Year of publication
2001
Pages
363-381
Database
ISI
SICI code
0925-5001(200104)19:4<363:SAOGNN>2.0.ZU;2-M
Abstract
The paper introduces a new approach to analyzing the stability of neural network models without using any Lyapunov function. With the new approach, we investigate the stability properties of the general gradient-based neural network model for optimization problems. Our discussion includes both isolated equilibrium points and connected equilibrium sets, which could be unbounded. For a general optimization problem, if the objective function is bounded below and its gradient is Lipschitz continuous, we prove that (a) any trajectory of the gradient-based neural network converges to an equilibrium point, and (b) Lyapunov stability is equivalent to asymptotic stability in gradient-based neural networks. For a convex optimization problem, under the same assumptions, we show that any trajectory of the gradient-based neural network converges to an asymptotically stable equilibrium point of the network. For a general nonlinear objective function, we propose a refined gradient-based neural network whose trajectory, from an arbitrary initial point, converges to an equilibrium point satisfying the second-order necessary optimality conditions for the optimization problem. Promising simulation results of the refined gradient-based neural network on some problems are also reported.