H. Gotanda et al., "Effects of the Polarity of Neural-Network Units on Backpropagation Learning," Systems and Computers in Japan, 27(14), 1996, pp. 55-67
Citations: 6
Subject Categories
"Computer Science Hardware & Architecture", "Computer Science Information Systems", "Computer Science Theory & Methods"
This paper considers neural networks in which the initial values of the weights and biases are given by random numbers, as is usual. The results of backpropagation (BP) learning are compared for networks composed of unipolar units, whose activity ranges from 0 to 1, and networks composed of bipolar units, whose activity ranges from -0.5 to 0.5. When the input space is large, the separating hyperplane at the outset of learning passes near the center of the input space in the bipolar case, while in the unipolar case it passes near a vertex. Because of this property, the number of separating hyperplanes that effectively partition the input spaces of the layers during the updating or realization of the solution is larger in the bipolar case than in the unipolar case, and the difference becomes more pronounced as the network size increases. Simulations verify that, for large networks, learning with the bipolar network converges over a wider range of initial values than learning with the unipolar network. It is also shown that the kinds of solutions obtained by the unipolar network tend to be biased.
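The geometric claim about the initial hyperplane can be illustrated with a small Monte Carlo sketch (this is an illustration under assumed initialization distributions, not the authors' simulation): for a unit with random weights w and bias b, the separating hyperplane is w.x + b = 0, and its distance to a point c is |w.c + b| / ||w||. For bipolar inputs in [-0.5, 0.5]^n the center of the input space is the origin, so this distance is just |b| / ||w||, which shrinks as n grows; for unipolar inputs in [0, 1]^n the center is 0.5 * (1, ..., 1), and the extra 0.5 * sum(w) term keeps the hyperplane away from the center.

```python
import math
import random

def mean_distance_to_center(n, bipolar, trials=2000, seed=0):
    """Mean distance from a random initial hyperplane w.x + b = 0 to the
    center of the input hypercube, averaged over random initializations.
    Gaussian weights and a uniform bias are assumed here for illustration."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        w = [rng.gauss(0.0, 1.0) for _ in range(n)]  # random initial weights
        b = rng.uniform(-1.0, 1.0)                   # random initial bias
        norm = math.sqrt(sum(wi * wi for wi in w))
        if bipolar:
            # bipolar inputs lie in [-0.5, 0.5]^n: center is the origin
            total += abs(b) / norm
        else:
            # unipolar inputs lie in [0, 1]^n: center is 0.5 * (1, ..., 1)
            total += abs(0.5 * sum(w) + b) / norm
    return total / trials

n = 100
d_bi = mean_distance_to_center(n, bipolar=True)
d_uni = mean_distance_to_center(n, bipolar=False)
# For large n, d_bi is much smaller than d_uni: the bipolar hyperplane
# stays near the center of the input space, the unipolar one does not.
```

Since the side length of the hypercube is 1 in both cases, the comparison of d_bi and d_uni directly reflects how centrally the initial hyperplane cuts the input space, which is the property the paper links to the number of effective separating hyperplanes.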