Optimization of perceptron neural network classifiers requires a robust optimization algorithm. In general, the best network is selected after a number of optimization trials. An effective optimization algorithm generates good weight-vector solutions in a few optimization trial runs owing to its inherent ability to escape local minima, whereas a less effective algorithm requires a larger number of trial runs. Repetitive training and testing is a tedious process, so an effective algorithm is desirable to reduce training time and improve the quality of the set of available weight-vector solutions. We present leap-frog as a robust optimization algorithm for training neural networks. In this paper the dynamic principles of leap-frog are described, together with experiments that show its ability to generate reliable weight-vector solutions. Performance histograms are used to compare leap-frog with a variable-metric method, a conjugate-gradient method with modified restarts, and a constrained-momentum-based algorithm. Results indicate that leap-frog performs better in terms of classification error than the other three algorithms on two distinctly different test problems.
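The dynamic principle behind leap-frog is to treat the weight vector as a particle accelerated by the negative gradient of the error surface, so that accumulated momentum can carry it past shallow local minima. The sketch below is a minimal illustration of that idea, assuming a toy quadratic objective, simple Euler time integration, and a simplified uphill-damping rule in place of the method's full interference strategy; the function names, parameters, and stopping rule are illustrative assumptions, not the implementation described in the paper.

```python
# Illustrative leap-frog-style dynamic minimization (simplified sketch).
# Assumption: the uphill rule "reject the move and halve the velocity" stands
# in for the method's more elaborate interference/restart strategy.
import numpy as np

def leapfrog_minimize(grad, x0, f, dt=0.1, max_iter=2000, tol=1e-6):
    """Simulate particle motion x'' = -grad f(x) with Euler integration."""
    x = np.asarray(x0, dtype=float)
    v = np.zeros_like(x)                    # particle starts at rest
    for _ in range(max_iter):
        a = -grad(x)                        # acceleration from the gradient
        v_new = v + dt * a                  # update velocity
        x_new = x + dt * v_new              # update position
        if f(x_new) > f(x):                 # uphill move: damp the motion
            x_new = x                       # reject the step (simplified rule)
            v_new = 0.5 * v_new             # and bleed off kinetic energy
        x, v = x_new, v_new
        if np.linalg.norm(grad(x)) < tol:   # stop near a stationary point
            break
    return x

# Toy usage: minimize a 2-D quadratic bowl f(x) = x^T A x.
A = np.array([[3.0, 0.5], [0.5, 1.0]])
f = lambda x: x @ A @ x
grad = lambda x: 2.0 * A @ x
print(leapfrog_minimize(grad, x0=[2.0, -1.5], f=f))
```

In a training context the role of `f` and `grad` would be played by the network's classification error and its gradient with respect to the weight vector; the damping step is what dissipates energy so the particle eventually settles in a minimum rather than oscillating indefinitely.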