In this paper we propose a scheme for mapping two important artificial neural network (ANN) models onto the popular k-ary n-cube parallel architectures (KNCs). The scheme is based on generalizing the mapping of a bipartite graph onto the KNC architecture and can therefore be adapted to any model whose computations can be represented by a bipartite task graph. Our approach is the first to adjust the granularity of parallelism so as to achieve the best possible performance based on the properties of the computational model and the target architecture. We first introduce a methodology for the optimal implementation of multi-layer feedforward artificial neural networks (FFANNs), trained with the backpropagation algorithm, on KNCs. We prove that our mapping methodology is time-optimal and that it provides maximum processor utilization regardless of the structure of the FFANN. We then show that the same methodology can be used for the efficient mapping of Radial Basis Function neural networks (RBFs) onto KNCs.