This article introduces the concept of optimally distributed computation in feedforward neural networks via regularization of weight saliency. By constraining the relative importance of the parameters, computation can be distributed thinly and evenly throughout the network. We propose that this will have beneficial effects on fault-tolerance performance and generalization ability in large network architectures. These theoretical predictions are verified by simulation experiments on two problems: one artificial and the other a real-world task. In summary, this article presents regularization terms for distributing neural computation optimally.
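As a rough illustration of the idea, and not the article's own regularization terms (which are developed later), one hypothetical way to constrain relative parameter importance is to penalize the spread of a simple per-weight saliency measure such as |w_i ∂E/∂w_i| across the network. In the sketch below, the network, the saliency definition, the variance penalty, and the names `loss_fn`, `saliency_variance`, and `lam` are all illustrative assumptions.

```python
import jax
import jax.numpy as jnp

def loss_fn(params, inputs, targets):
    # Small feedforward network with one hidden layer (illustrative only).
    h = jnp.tanh(inputs @ params["W1"] + params["b1"])
    preds = h @ params["W2"] + params["b2"]
    return jnp.mean((preds - targets) ** 2)

def saliency_variance(params, inputs, targets):
    # Hypothetical saliency of each weight: |w * dE/dw|.
    grads = jax.grad(loss_fn)(params, inputs, targets)
    s = jnp.concatenate([
        jnp.abs(params[k] * grads[k]).ravel() for k in ("W1", "W2")
    ])
    # Penalizing the variance of the saliencies pushes them toward a common
    # value, i.e. spreads the computation thinly and evenly over the weights.
    return jnp.var(s)

def regularized_loss(params, inputs, targets, lam=1e-2):
    # Task error plus a saliency-evenness penalty weighted by lam.
    return loss_fn(params, inputs, targets) + lam * saliency_variance(
        params, inputs, targets
    )
```

The design intent mirrors the claim in the text: rather than letting a few weights carry most of the computation, the penalty discourages large differences in weight importance, which is the property argued to help fault tolerance and generalization.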