An efficient implementation of a quasi-Newton algorithm for training feed-forward neural networks on a Cray Y-MP is presented. The most time-consuming step of neural network training with the quasi-Newton algorithm is the computation of the error function and its gradient. The parallelism inherent in these computations can be exploited through vectorization on a Cray Y-MP supercomputer. We show how they can be carried out so that the overall performance of the neural network training process is enhanced substantially.
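To make the computational pattern concrete, here is a minimal sketch of the error/gradient kernel for a one-hidden-layer network. This is an illustration of the idea rather than the paper's Cray implementation: NumPy whole-array operations stand in for the vector hardware, and all names (error_and_gradient, w1, w2, x, t) are our own. Processing the entire pattern set at once turns the per-pattern loops into matrix products, which is exactly the structure a vector machine exploits.

    import numpy as np

    def error_and_gradient(w1, w2, x, t):
        # Forward pass over all P training patterns at once:
        # x is (P, n_in), t is (P, n_out),
        # w1 is (n_in, n_hid), w2 is (n_hid, n_out).
        h = np.tanh(x @ w1)          # hidden activations, (P, n_hid)
        r = h @ w2 - t               # output residuals, (P, n_out)
        e = 0.5 * np.sum(r * r)      # sum-of-squares error
        # Gradient by backpropagation, again as whole-batch matrix products.
        g_w2 = h.T @ r
        g_w1 = x.T @ ((r @ w2.T) * (1.0 - h * h))   # tanh' = 1 - tanh**2
        return e, np.concatenate([g_w1.ravel(), g_w2.ravel()])

The returned pair, the scalar error and the flattened gradient vector, is exactly what a quasi-Newton routine such as BFGS consumes at each iteration, so speeding up this kernel accelerates the training process as a whole.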