The paper introduces a new approach to analyzing the stability of neural network models without using any Lyapunov function. With the new approach, we investigate the stability properties of the general gradient-based neural network model for optimization problems. Our discussion covers both isolated equilibrium points and connected equilibrium sets, which may be unbounded. For a general optimization problem, if the objective function is bounded below and its gradient is Lipschitz continuous, we prove that (a) any trajectory of the gradient-based neural network converges to an equilibrium point, and (b) Lyapunov stability is equivalent to asymptotic stability for gradient-based neural networks. For a convex optimization problem, under the same assumptions, we show that any trajectory of the gradient-based neural network converges to an asymptotically stable equilibrium point of the network. For a general nonlinear objective function, we propose a refined gradient-based neural network whose trajectory, from any initial point, converges to an equilibrium point satisfying the second-order necessary optimality conditions of the optimization problem. Promising simulation results of the refined gradient-based neural network on several problems are also reported.
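For concreteness, the gradient-based neural network model discussed above is commonly written as the gradient-flow dynamical system sketched below; the paper's exact formulation may include a positive scaling matrix or time constant, so this display is only an illustration of that standard form:
\[
\frac{\mathrm{d}x(t)}{\mathrm{d}t} = -\nabla f\bigl(x(t)\bigr), \qquad x(0) = x_0,
\]
where $f$ is the objective function, assumed bounded below with Lipschitz continuous gradient, and an equilibrium point is any $x^{*}$ with $\nabla f(x^{*}) = 0$.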