This paper presents a layer-by-layer method for training feedforward networks with the Levenberg-Marquardt backpropagation algorithm. Levenberg-Marquardt backpropagation has been noted as an efficient method for training feedforward neural networks in terms of training accuracy, convergence properties, and overall training time. We introduce a method that further improves the computational and memory complexity of this algorithm by modifying the weights layer by layer. Four examples, drawn from the literature and from an engineering application, demonstrate that the technique outperforms the general Levenberg-Marquardt backpropagation, which adjusts all the weights simultaneously. These examples show that further improvement, in both training time and convergence properties, can be obtained with the new approach.
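To make the contrast concrete, the sketch below illustrates one standard Levenberg-Marquardt step, which solves a single linear system over all W weights, next to a layer-by-layer sweep that solves one smaller system per layer. This is a minimal sketch under stated assumptions, not the paper's implementation: the toy 3-5-1 network, the data, the damping factor mu, and the finite-difference Jacobian are all illustrative choices.

```python
# A minimal sketch (not the paper's implementation) contrasting a standard
# Levenberg-Marquardt (LM) step over ALL weights with a layer-by-layer sweep.
# Network, data, mu, and the finite-difference Jacobian are assumptions.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(20, 3))                  # 20 samples, 3 inputs
y = np.sin(X.sum(axis=1, keepdims=True))      # toy regression target

W1_init = rng.normal(scale=0.5, size=(3, 5))  # hidden-layer weights (15 params)
W2_init = rng.normal(scale=0.5, size=(5, 1))  # output-layer weights (5 params)
mu = 1e-2                                     # LM damping factor

def residuals(W1, W2):
    """Error vector e over all samples for a 3-5-1 tanh network."""
    return (np.tanh(X @ W1) @ W2 - y).ravel()

def jacobian(params, rebuild, eps=1e-6):
    """Finite-difference Jacobian of the residuals w.r.t. a flat parameter vector."""
    base = residuals(*rebuild(params))
    J = np.empty((base.size, params.size))
    for i in range(params.size):
        p = params.copy()
        p[i] += eps
        J[:, i] = (residuals(*rebuild(p)) - base) / eps
    return J, base

def lm_step(params, rebuild):
    """One damped Gauss-Newton (LM) update: p <- p - (J^T J + mu I)^{-1} J^T e."""
    J, e = jacobian(params, rebuild)
    return params - np.linalg.solve(J.T @ J + mu * np.eye(params.size), J.T @ e)

# --- General LM: one 20x20 system over every weight simultaneously ---
W1, W2 = W1_init.copy(), W2_init.copy()
split = W1.size
rebuild_all = lambda p: (p[:split].reshape(W1_init.shape), p[split:].reshape(W2_init.shape))
theta = lm_step(np.concatenate([W1.ravel(), W2.ravel()]), rebuild_all)
W1, W2 = rebuild_all(theta)
print("full-network LM:  ", np.linalg.norm(residuals(W1, W2)))

# --- Layer-by-layer LM: a 15x15 system, then a 5x5 system, from the same start.
# The largest matrix formed and inverted shrinks from W x W to the largest W_l x W_l.
W1, W2 = W1_init.copy(), W2_init.copy()
W1 = lm_step(W1.ravel(), lambda q: (q.reshape(W1_init.shape), W2)).reshape(W1_init.shape)
W2 = lm_step(W2.ravel(), lambda q: (W1, q.reshape(W2_init.shape))).reshape(W2_init.shape)
print("layer-by-layer LM:", np.linalg.norm(residuals(W1, W2)))
```

The intended saving is visible in the sizes of the normal-equation matrices: the full-network step forms and inverts a W x W matrix (here 20 x 20), while the layer-wise sweep only ever forms one W_l x W_l matrix per layer (here 15 x 15 and 5 x 5), which is the source of the reduced computation and memory cost the abstract refers to.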