It has long been observed by connectionists that small changes in initial conditions, prior to training, can result in networks that generalize very differently. We have performed a systematic study of this phenomenon, using a number of different statistical measures of generalization differences, from which we derive a formal definition of Generalization Diversity. We quantify the relative impacts on generalization of the major parameters used in network initialization, and we extend the formal framework to encompass how generalization differences vary from one parameter to another. We reveal, for example, the relative effects of random initialization of the link weights and of variation in the number of hidden units, and how similar these two effects are. Finally, examples are presented of how the proposed generalization diversity measure may be exploited to improve the performance of neural-net systems, and we show how several of these measures can be used to engineer reliability improvements in such systems.
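The abstract does not state the formal definition itself; as a purely illustrative sketch, one natural disagreement-based measure of generalization diversity is the mean pairwise disagreement, over an ensemble of trained networks, on a common held-out test set. The function names and toy predictions below are assumptions for illustration, not the paper's actual formulation.

    import numpy as np

    def pairwise_disagreement(preds_a, preds_b):
        """Fraction of test items on which two networks' predictions differ."""
        return float(np.mean(preds_a != preds_b))

    def generalization_diversity(all_preds):
        """Mean pairwise disagreement across an ensemble of trained networks.

        all_preds: array of shape (n_networks, n_test_items), each row holding
        one network's predicted class labels on a common test set.
        NOTE: illustrative only; the paper's formal definition may differ.
        """
        n = len(all_preds)
        pairs = [(i, j) for i in range(n) for j in range(i + 1, n)]
        return sum(pairwise_disagreement(all_preds[i], all_preds[j])
                   for i, j in pairs) / len(pairs)

    # Toy example: three networks trained from different random weight
    # initializations, evaluated on the same five test items.
    preds = np.array([
        [0, 1, 1, 0, 1],   # network 1
        [0, 1, 0, 0, 1],   # network 2
        [1, 1, 1, 0, 0],   # network 3
    ])
    print(generalization_diversity(preds))  # 0.4: pairs disagree on 40% of items on average

Under this reading, a diversity near 0 means the differently initialized networks generalize almost identically, while larger values indicate that initialization choices materially change what the networks learn.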