In [2], a parallel perceptron learning algorithm on the single-channel broadcast communication model was proposed to speed up the learning of the weights of perceptrons [3]. The results in [2] showed that, given n training examples, the average speedup is 1.48n^0.91/log n with n processors. Here, we explain how the parallelization may be modified so that it is applicable to any number of processors. Both analytical and experimental results show that the average speedup can reach nearly O(r) with r processors when r is much smaller than n.
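The algorithm of [2] is not reproduced here, but the general idea of data-parallel perceptron training with an arbitrary processor count r can be sketched as follows. This is a minimal illustrative simulation, not the method of [2]: the partitioning scheme, the delta-merging step, and the function names (`perceptron_update`, `parallel_epoch`) are assumptions made for this example.

```python
def perceptron_update(w, b, examples, lr=1.0):
    """One pass of perceptron updates over a chunk of examples.
    Returns accumulated weight/bias deltas instead of mutating w,
    so that chunks can be processed independently and merged."""
    dw = [0.0] * len(w)
    db = 0.0
    for x, y in examples:
        activation = sum(wi * xi for wi, xi in zip(w, x)) + b
        pred = 1 if activation >= 0 else -1
        if pred != y:  # misclassified: apply the perceptron rule
            for i, xi in enumerate(x):
                dw[i] += lr * y * xi
            db += lr * y
    return dw, db

def parallel_epoch(w, b, examples, r):
    """Simulate one data-parallel epoch: split the n examples into r
    chunks (one per processor), compute every chunk's deltas from the
    same starting weights, then sum the deltas into the shared model."""
    chunks = [examples[i::r] for i in range(r)]  # round-robin split
    # All chunks are evaluated against the same (w, b), mimicking
    # r processors working concurrently before a merge step.
    deltas = [perceptron_update(w, b, chunk) for chunk in chunks]
    for dw, db in deltas:
        for i in range(len(w)):
            w[i] += dw[i]
        b += db
    return w, b
```

Because each of the r chunks holds roughly n/r examples and the per-chunk passes are independent, the per-epoch work on each processor shrinks by about a factor of r (ignoring the merge cost), which is consistent with a near-linear speedup when r is much smaller than n.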