MINIMUM-SEEKING PROPERTIES OF ANALOG NEURAL NETWORKS WITH MULTILINEAR OBJECTIVE FUNCTIONS

Authors
Citation
M. Vidyasagar, MINIMUM-SEEKING PROPERTIES OF ANALOG NEURAL NETWORKS WITH MULTILINEAR OBJECTIVE FUNCTIONS, IEEE Transactions on Automatic Control, 40(8), 1995, pp. 1359-1375
Citations number
28
Subject Categories
"Control Theory & Cybernetics","Robotics & Automatic Control","Engineering, Electrical & Electronic"
ISSN journal
0018-9286
Volume
40
Issue
8
Year of publication
1995
Pages
1359 - 1375
Database
ISI
SICI code
0018-9286(1995)40:8<1359:MPOANN>2.0.ZU;2-#
Abstract
In this paper, we study the problem of minimizing a multilinear objective function over the discrete set {0,1}^n. This is an extension of an earlier work addressed to the problem of minimizing a quadratic function over {0,1}^n. A gradient-type neural network is proposed to perform the optimization. A novel feature of the network is the introduction of a so-called bias vector. The network is operated in the high-gain region of the sigmoidal nonlinearities. The following comprehensive theorem is proved: for all sufficiently small bias vectors except those belonging to a set of measure zero, for all sufficiently large sigmoidal gains, and for all initial conditions except those belonging to a nowhere dense set, the state of the network converges to a local minimum of the objective function. This is a considerable generalization of earlier results for quadratic objective functions; moreover, the proofs here are completely rigorous. The neural network-based approach to optimization is briefly compared to the so-called interior-point methods of nonlinear programming, as exemplified by Karmarkar's algorithm. Some problems for future research are suggested.
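To illustrate the flavor of the scheme described in the abstract, here is a minimal sketch, not the paper's actual network: an Euler-discretized gradient-type flow whose outputs pass through high-gain sigmoids, with a small generic bias vector added to the dynamics. The example objective, gain, step size, and the exact way the bias enters are all illustrative assumptions.

```python
import numpy as np

def f(x):
    """Example multilinear objective on {0,1}^3 (no squared terms)."""
    x0, x1, x2 = x
    return 2*x0*x1 - 3*x1*x2 + x0 - x2 + x0*x1*x2

def grad_f(x):
    """Analytic gradient of f (each coordinate enters linearly)."""
    x0, x1, x2 = x
    return np.array([
        2*x1 + 1 + x1*x2,
        2*x0 - 3*x2 + x0*x2,
        -3*x1 - 1 + x0*x1,
    ])

def sigmoid(z):
    # Clip the argument so np.exp cannot overflow at high gain.
    return 1.0 / (1.0 + np.exp(-np.clip(z, -60.0, 60.0)))

def minimize(gain=50.0, step=0.01, iters=2000, seed=0):
    rng = np.random.default_rng(seed)
    u = rng.normal(size=3)             # internal (pre-sigmoid) state
    bias = 1e-3 * rng.normal(size=3)   # small generic bias vector (illustrative)
    for _ in range(iters):
        x = sigmoid(gain * u)          # high-gain sigmoidal outputs
        u = u - step * (grad_f(x) + bias)  # gradient-type dynamics
    return np.round(sigmoid(gain * u))     # snap to the nearest vertex

x_star = minimize()
print(x_star, f(x_star))
```

With a high gain the outputs sit near the vertices of the unit cube, so the trajectory effectively hops between vertices until no single-coordinate flip lowers f, i.e., it settles at a local minimum over {0,1}^3; the bias vector's role in the paper is to rule out degenerate equilibria for almost all small biases.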