Proper initialization is one of the most important prerequisites for fast convergence of feedforward neural networks such as high-order and multilayer perceptrons. This publication aims at determining the optimal variance (or range) for the initial weights and biases, which is the principal parameter of random initialization methods for both types of neural networks. An overview of random weight initialization methods for multilayer perceptrons is presented. These methods are extensively tested using eight real-world benchmark data sets and a broad range of initial weight variances by means of more than 30 000 simulations, with the aim of finding the best weight initialization method for multilayer perceptrons. For high-order networks, a large number of experiments (more than 200 000 simulations) were performed, using three weight distributions, three activation functions, several network orders, and the same eight data sets. The results of these experiments are compared to weight initialization techniques for multilayer perceptrons, which leads to the proposal of a suitable initialization method for high-order perceptrons. The conclusions on the initialization methods for both types of networks are justified by sufficiently small confidence intervals of the mean convergence times.
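As an illustration of the kind of random initialization studied here, the following minimal sketch (in Python with NumPy; the function name init_layer and the variance value 0.1 are illustrative assumptions, not taken from the paper) draws a layer's weights and biases uniformly from an interval chosen to match a prescribed variance. For a uniform distribution U(-a, a) the variance is a^2/3, so a = sqrt(3 * variance).

    import numpy as np

    def init_layer(n_in, n_out, variance, rng=None):
        # Draw weights and biases uniformly on [-a, a], where a is
        # chosen so the distribution has the requested variance:
        # Var(U(-a, a)) = a**2 / 3, hence a = sqrt(3 * variance).
        rng = np.random.default_rng() if rng is None else rng
        a = np.sqrt(3.0 * variance)
        weights = rng.uniform(-a, a, size=(n_in, n_out))
        biases = rng.uniform(-a, a, size=n_out)
        return weights, biases

    # Hypothetical usage: one hidden layer of a multilayer perceptron,
    # initialized with variance 0.1 (a value to be tuned empirically).
    W, b = init_layer(n_in=8, n_out=16, variance=0.1)
    print(W.shape, b.shape, W.var())  # empirical variance near 0.1

The experiments described above amount to sweeping such a variance (or range) parameter over a broad interval and comparing the resulting mean convergence times across data sets.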