A class of neural networks that solve linear programming problems is analyzed. The neural networks considered are modeled by dynamic gradient systems that are constructed using a parametric family of exact (nondifferentiable) penalty functions. It is proved that for a given linear programming problem and sufficiently large penalty parameters, any trajectory of the neural network converges in finite time to its solution set. For the analysis, Lyapunov-type theorems are developed for finite-time convergence of nonsmooth sliding-mode dynamic systems to invariant sets. The results are illustrated via numerical simulation examples.
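The dynamics described above can be sketched in code. The following is a minimal illustration, not the paper's implementation: it assumes a toy linear program (minimize c^T x subject to Ax <= b, x >= 0), forms an exact nondifferentiable penalty function E(x) = c^T x + mu*sum(max(0, Ax - b)) + mu*sum(max(0, -x)), and discretizes the subgradient flow dx/dt = -dE(x) with a forward Euler step. The problem data, penalty parameter `mu`, and step size `dt` are all hypothetical choices.

```python
import numpy as np

# Hypothetical toy LP: minimize c^T x  s.t.  A x <= b,  x >= 0.
# (Equivalently, maximize x1 + x2; the optimum is the vertex (1.6, 1.2).)
c = np.array([-1.0, -1.0])
A = np.array([[1.0, 2.0],
              [3.0, 1.0]])
b = np.array([4.0, 6.0])

mu = 10.0    # penalty parameter; must be "sufficiently large" for exactness
dt = 1e-4    # Euler step size for the subgradient flow
x = np.zeros(2)

for _ in range(60000):
    # Subgradient of E(x) = c^T x + mu*sum(max(0, Ax-b)) + mu*sum(max(0, -x)):
    g = c.copy()
    g += mu * A.T @ (A @ x - b > 0).astype(float)  # active inequality rows
    g -= mu * (x < 0).astype(float)                # active nonnegativity bounds
    x -= dt * g                                    # Euler step of dx/dt = -g

print(x)  # chatters in a sliding mode near the optimal vertex
```

Because the penalty is nondifferentiable, the discretized trajectory exhibits the chattering characteristic of sliding-mode systems once a constraint becomes active; the continuous-time flow, by contrast, reaches the solution set in finite time as the abstract states.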