One of the fundamental drawbacks of learning by gradient descent techniques is the susceptibility to local minima during training. Recently, some authors have independently introduced new learning algorithms that are based on the properties of terminal attractors and repellers. These algorithms were claimed to perform global optimization of the cost in finite time, provided that a null solution exists. In this paper, we prove that, in the case of local-minima-free error functions, terminal attractor algorithms guarantee that the optimal solution is reached in a number of steps that is independent of the cost function. Moreover, in the case of multimodal functions, we prove that, unfortunately, there are no theoretical guarantees that a global solution can be reached or that the algorithms perform satisfactorily from an operational point of view, unless particularly favourable conditions are satisfied. On the other hand, the ideas behind these innovative methods are very interesting and deserve further investigation.
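The abstract does not spell out the update rule, so the following is only a minimal sketch of the kind of terminal-attractor dynamics it alludes to, under the assumption of a commonly cited form: the weight flow is scaled so that the error E obeys dE/dt = -eta * E^(1/3), a terminal attractor at E = 0 that is reached in finite time regardless of the detailed shape of E, provided the gradient vanishes only where E = 0. The function and parameter names (terminal_attractor_descent, eta, tol) are illustrative, not taken from the paper.

```python
import numpy as np

def error(w):
    # Toy local-minima-free cost with a null solution at w = 0.
    return 0.5 * float(np.dot(w, w))

def grad(w):
    return w

def terminal_attractor_descent(w0, eta=1.0, dt=1e-3, tol=1e-4, max_steps=100_000):
    """Euler-discretized terminal-attractor gradient flow (illustrative form).

    Normalizing the gradient by ||grad E||^2 and scaling by E^(1/3) gives
    dE/dt = -eta * E^(1/3), so E hits zero at the finite time
    t* = (3 / (2*eta)) * E(0)^(2/3), independent of the cost's shape.
    At a spurious minimum (grad = 0 with E > 0) the update blows up,
    which is exactly the failure mode for multimodal functions.
    """
    w = np.array(w0, dtype=float)
    for step in range(max_steps):
        E = error(w)
        if E < tol:                       # modest tolerance: a coarse Euler step
            return w, step * dt           # oscillates if pushed much closer to 0
        g = grad(w)
        w -= dt * eta * E ** (1.0 / 3.0) * g / (np.dot(g, g) + 1e-12)
    return w, max_steps * dt

if __name__ == "__main__":
    w0 = np.array([2.0, -1.0])
    w_final, t_reached = terminal_attractor_descent(w0)
    print("predicted finite convergence time:", 1.5 * error(w0) ** (2.0 / 3.0))
    print("simulated time to reach tolerance:", t_reached)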