Architecture selection is an important aspect of neural network (NN) design, trading off performance against computational complexity. Sensitivity analysis has been used successfully to prune irrelevant parameters from feedforward NNs. This paper presents a new pruning algorithm that uses sensitivity analysis to quantify the relevance of input and hidden units. A new statistical pruning heuristic, based on variance analysis, is proposed to decide which units to prune. The basic idea is that a unit whose variance in sensitivity is not significantly different from zero is irrelevant and can be removed. Experimental results show that the new pruning algorithm correctly prunes irrelevant input and hidden units. The new pruning algorithm is also compared with standard pruning algorithms.
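The variance-based heuristic can be sketched as follows. This is a minimal illustration, not the paper's exact procedure: it assumes per-pattern sensitivities have already been computed (e.g. as output gradients with respect to each unit), and it instantiates "variance not significantly different from zero" as a one-sided chi-squared variance test against a small reference variance; the function name, the reference variance `sigma0_sq`, and the significance level `alpha` are all hypothetical choices.

```python
import numpy as np
from scipy.stats import chi2

def prune_mask(S, sigma0_sq=1e-4, alpha=0.05):
    """Flag irrelevant units given a matrix S of per-pattern
    sensitivities (shape: patterns x units).

    A unit is flagged for pruning when its sensitivity variance is
    not significantly larger than a small reference variance
    sigma0_sq, tested with a one-sided chi-squared variance test
    (an assumed instantiation of the statistical heuristic)."""
    n = S.shape[0]
    var = S.var(axis=0, ddof=1)            # sample variance per unit
    gamma = (n - 1) * var / sigma0_sq      # chi-squared test statistic
    crit = chi2.ppf(1.0 - alpha, df=n - 1) # critical value at level alpha
    return gamma < crit                    # True -> prune the unit

# Toy data: the first unit's sensitivity barely varies (irrelevant),
# the second unit's sensitivity varies strongly (relevant).
rng = np.random.default_rng(0)
S = np.column_stack([rng.normal(0.0, 1e-3, 200),   # near-zero variance
                     rng.normal(0.0, 1.0, 200)])   # large variance
print(prune_mask(S))
```

With these scales the first unit's test statistic falls well below the critical value while the second unit's lies far above it, so only the first unit is marked for removal.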