The concept of 'optimally distributed computation' in feed-forward neural networks is introduced via regularisation of weight saliency. By constraining the relative importance of the parameters, computation can be distributed thinly and evenly throughout the network. It is proposed that this will have beneficial effects on fault-tolerance performance and generalisation ability in augmented network architectures. These theoretical predictions are verified by simulation experiments on two problems: one artificial, the other a 'real-world' task. Regularisation terms for distributing neural computation optimally are presented.
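
As a minimal illustrative sketch (not necessarily the specific regularisation terms presented here), a saliency-evening penalty could punish the spread of per-weight saliencies, assuming an Optimal-Brain-Damage-style saliency estimate:

\[
s_i \approx \tfrac{1}{2}\,\frac{\partial^2 E}{\partial w_i^2}\, w_i^2,
\qquad
\Omega = \sum_i \left(s_i - \bar{s}\right)^2,
\qquad
E_{\text{total}} = E + \lambda\,\Omega,
\]

where $E$ is the network error, $\bar{s}$ the mean saliency, and $\lambda$ a hypothetical coefficient controlling how strongly saliencies are pushed toward uniformity, and hence how thinly and evenly computation is spread across the weights.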