TERMINAL ATTRACTOR ALGORITHMS - A CRITICAL ANALYSIS

Citation
M. Bianchini et al., TERMINAL ATTRACTOR ALGORITHMS - A CRITICAL ANALYSIS, Neurocomputing, 15(1), 1997, pp. 3-13
Citations number
19
Categorie Soggetti
Computer Sciences, Special Topics; Computer Science, Artificial Intelligence; Neurosciences
Journal title
Neurocomputing
ISSN journal
09252312
Volume
15
Issue
1
Year of publication
1997
Pages
3 - 13
Database
ISI
SICI code
0925-2312(1997)15:1<3:TAA-AC>2.0.ZU;2-A
Abstract
One of the fundamental drawbacks of learning by gradient descent techniques is the susceptibility to local minima during training. Recently, some authors have independently introduced new learning algorithms that are based on the properties of terminal attractors and repellers. These algorithms were claimed to perform global optimization of the cost in finite time, provided that a null solution exists. In this paper, we prove that, in the case of local minima free error functions, terminal attractor algorithms guarantee that the optimal solution is reached in a number of steps that is independent of the cost function. Moreover, in the case of multimodal functions, we prove that, unfortunately, there are no theoretical guarantees that a global solution can be reached or that the algorithms perform satisfactorily from an operational point of view, unless particularly favourable conditions are satisfied. On the other hand, the ideas behind these innovative methods are very interesting and deserve further investigation.
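As a minimal illustration of the terminal attractor property the abstract refers to (not the algorithms analysed in the paper), the scalar dynamics x' = -x^(1/3) reach x = 0 in finite time, whereas the ordinary linear attractor x' = -x only decays asymptotically. The sketch below, with hypothetical helper names and parameter values chosen for demonstration, integrates both with a simple Euler scheme:

```python
import math

def time_to_zero(f, x0, dt=1e-4, t_max=3.0, tol=1e-6):
    """Euler-integrate x' = f(x) from x0; return the first time x drops
    below tol, or None if that never happens within t_max."""
    x, t = x0, 0.0
    while t < t_max:
        if x <= tol:
            return t
        # Clamp at zero so the cube root never sees a negative argument.
        x = max(x + dt * f(x), 0.0)
        t += dt
    return None

# Terminal attractor: x' = -x**(1/3) hits zero at the finite time
# (3/2) * x0**(2/3), i.e. t = 1.5 for x0 = 1.
t_terminal = time_to_zero(lambda x: -x ** (1.0 / 3.0), 1.0)

# Ordinary attractor: x' = -x decays like exp(-t) and never reaches zero.
t_ordinary = time_to_zero(lambda x: -x, 1.0)

print(t_terminal)  # close to the analytic value 1.5
print(t_ordinary)  # None within t_max: only asymptotic convergence
```

The contrast mirrors the finite-time claim: for the terminal attractor the Lipschitz condition fails at the equilibrium, so trajectories reach it exactly rather than approach it asymptotically.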