STEEPEST DESCENT METHODS WITH GENERALIZED DISTANCES FOR CONSTRAINED OPTIMIZATION

Authors
Citation
A.N. Iusem, STEEPEST DESCENT METHODS WITH GENERALIZED DISTANCES FOR CONSTRAINED OPTIMIZATION, Acta Applicandae Mathematicae, 46(2), 1997, pp. 225-246
Citations number
18
Subject Categories
Mathematics
ISSN journal
0167-8019
Volume
46
Issue
2
Year of publication
1997
Pages
225 - 246
Database
ISI
SICI code
0167-8019(1997)46:2<225:SDMWGD>2.0.ZU;2-1
Abstract
We consider the problem min f(x) s.t. x is an element of C, where C is a closed and convex subset of R^n with nonempty interior, and introduce a family of interior point methods for this problem, which can be seen as approximate versions of generalized proximal point methods. Each step consists of a one-dimensional search along either a curve or a segment in the interior of C. The information about the boundary of C is contained in a generalized distance which defines the segment or the curve, and whose gradient diverges at the boundary of C. The objective of the search is either f or f plus a regularizing term. When C = R^n, the usual steepest descent method is a particular case of our general scheme, and we manage to extend known convergence results for the steepest descent method to our family: for nonregularized one-dimensional searches, under a level set boundedness assumption on f, the sequence is bounded, the difference between consecutive iterates converges to 0, and every cluster point of the sequence satisfies first-order optimality conditions for the problem, i.e. is a solution if f is convex. For the regularized search and convex f, no boundedness condition on f is needed, and full and global convergence of the sequence to a solution of the problem is established.
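The scheme described above can be illustrated with a minimal sketch. The example below is an assumption-laden toy instance, not the paper's exact method: we take C to be the nonnegative orthant and use the entropy (Bregman-type) geometry, for which the search curve x(t) = x_k * exp(-t grad f(x_k)) stays in the interior of C for every t >= 0, so the iterates never touch the boundary. The crude grid line search stands in for whichever one-dimensional search rule the paper actually analyzes.

```python
import numpy as np

def interior_steepest_descent(f, grad_f, x0, iters=200):
    """Illustrative sketch of an interior-point steepest descent step
    on C = R^n_+ in the entropy (generalized-distance) geometry.

    The one-dimensional search is performed along the curve
        x(t) = x_k * exp(-t * grad f(x_k)),  t >= 0,
    which lies in the interior of C for all t, so no projection onto C
    is ever needed.  The grid search over t below is a stand-in for the
    paper's line-search rule (an assumption of this sketch)."""
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        g = grad_f(x)
        ts = np.logspace(-4, 1, 40)          # candidate step sizes t
        curve = [x * np.exp(-t * g) for t in ts]  # points on the search curve
        vals = [f(c) for c in curve]
        x = curve[int(np.argmin(vals))]      # best point along the curve
    return x

# Toy problem: min ||x - a||^2 over x >= 0, where a has a negative entry,
# so the constrained minimizer pushes that coordinate toward the boundary.
a = np.array([2.0, -1.0])
f = lambda x: np.sum((x - a) ** 2)
grad_f = lambda x: 2.0 * (x - a)
x_star = interior_steepest_descent(f, grad_f, np.array([1.0, 1.0]))
```

On this instance the iterates approach the constrained minimizer (2, 0): the first coordinate converges to 2, while the second shrinks multiplicatively toward the boundary without ever leaving the interior, matching the qualitative behavior the abstract describes.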