Among the several models of neurons and their interconnections, feedforward artificial neural networks (FFANN's) are the most popular because of their simplicity and effectiveness. Some obstacles, however, are yet to be cleared before they become truly reliable, smart information-processing systems. Difficulties such as long learning times and local minima may not affect FFANN's as much as the question of generalization ability does, because a network needs to be trained only once and may then be used for a long time. The generalization ability of ANN's, however, is of great interest for both theoretical understanding and practical use. This paper reports our observations about randomness in the generalization ability of FFANN's. A novel method for measuring generalization ability is defined; it can be used to identify the degree of randomness in the generalization ability of learning systems. If an FFANN architecture shows randomness in generalization ability for a given problem, multiple networks can be used to improve it. We have developed a model, called the voting model, for predicting the generalization ability of multiple networks. It has been shown that if the correct-classification probability of a single network is greater than one half, then the generalization ability of a voting network increases as the number of networks in it increases. Further analysis has shown that the VC-dimension of the voting network model may increase monotonically as the number of networks in the voting network is increased. This result is counterintuitive, since it is generally believed that the smaller the VC-dimension, the better the generalization ability.
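To make the voting-model claim concrete, one standard way to formalize it (our illustration, under an independence assumption that the paper's voting model may state differently) is to take $2k+1$ networks whose errors are independent and each of which classifies correctly with probability $p > 1/2$. The majority vote is then correct with probability
\[
P_{2k+1} \;=\; \sum_{i=k+1}^{2k+1} \binom{2k+1}{i}\, p^{i}\,(1-p)^{\,2k+1-i},
\]
and, by the Condorcet jury theorem, $P_{2k+1}$ increases with $k$ and tends to $1$ as $k \to \infty$, consistent with the claim that adding networks to the voting network improves its generalization ability.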