In modern pattern recognition, neural nets are used extensively. General use of a feedforward neural net consists of a training phase followed by a classification phase. Classification of an unknown test vector is very fast, consisting only of propagating the test vector through the net. Training, by contrast, involves an optimization procedure and is very time-consuming, since a feasible local minimum must be sought in weight space. If the training algorithm is based on error backpropagation, the optimization procedure consists of the following steps: computation of the activation of the net when all the training examples are presented to it; computation of an error function based on that activation; computation of the gradients at the current point in weight space; and, finally, adaptation of the weight values of the net. In this paper we present an analysis of a parallel implementation of the backpropagation algorithm with conjugate-gradient optimization for a three-layered, feedforward neural network, using networked workstations as a virtual parallel machine. The instance of the virtual machine is the PVM system, developed at Oak Ridge National Laboratory. We compare the overall performance of the parallel machine with averaged sequential runs in a typical research environment. From this, we identify the general requirements, such as the sizes of the data set and of the neural net, that render the parallel implementation useful compared with sequential execution of the same neural net training procedure.
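The four optimization steps named above (batch forward pass, error evaluation, gradient computation, weight adaptation) can be sketched for a three-layered feedforward net as follows. This is a minimal illustrative sketch, not the paper's implementation: the network sizes and data are hypothetical, and plain gradient descent stands in for the conjugate-gradient line search the paper actually uses.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy dimensions; the paper's actual problem sizes are not given here.
n_in, n_hid, n_out, n_samples = 4, 5, 3, 20
X = rng.normal(size=(n_samples, n_in))   # training inputs
T = rng.normal(size=(n_samples, n_out))  # target outputs

W1 = rng.normal(scale=0.1, size=(n_in, n_hid))   # input-to-hidden weights
W2 = rng.normal(scale=0.1, size=(n_hid, n_out))  # hidden-to-output weights

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def forward(W1, W2, X):
    # Step 1: activation of the net with all training examples presented (batch mode).
    H = sigmoid(X @ W1)
    Y = H @ W2  # linear output layer
    return H, Y

def error(Y, T):
    # Step 2: sum-of-squares error over the whole training set.
    return 0.5 * np.sum((Y - T) ** 2)

def gradients(W1, W2, X, T):
    # Step 3: gradients at the current point in weight space (backpropagation).
    H, Y = forward(W1, W2, X)
    dY = Y - T
    gW2 = H.T @ dY
    dH = (dY @ W2.T) * H * (1.0 - H)  # derivative of the sigmoid
    gW1 = X.T @ dH
    return gW1, gW2

# Step 4: adapt the weights. A fixed-step gradient descent is used here in
# place of the conjugate-gradient update described in the paper.
eta = 0.01
e0 = error(forward(W1, W2, X)[1], T)
for _ in range(100):
    gW1, gW2 = gradients(W1, W2, X, T)
    W1 -= eta * gW1
    W2 -= eta * gW2
e1 = error(forward(W1, W2, X)[1], T)
```

In a data-parallel PVM setting, the expensive steps are 1 and 3: each worker evaluates activations and gradients on its share of the training set, and the partial gradients are summed before the (cheap, sequential) weight update, which is what makes the data-set and network sizes the deciding factors for speedup.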