For a variety of reasons, the relative impacts of neural-net inputs on the output of a network's computation are valuable information to obtain. In particular, it is desirable to identify the significant features, or inputs, of a data-defined problem before the data is sufficiently preprocessed to enable high-performance neural-net training. We have defined and tested a technique for assessing such input impacts, which will be compared with a method described in a paper published earlier in this journal. The new approach, known as the 'clamping' technique, offers efficient impact assessment of the input features of the problem. Results of the clamping technique prove to be robust under a variety of different network configurations. Differences in architecture, training parameter values and subsets of the data all deliver much the same impact rankings, which supports the notion that the technique ranks an inherent property of the available data rather than a property of any particular feedforward neural network. The success, stability and efficiency of the clamping technique are shown to hold for a number of different real-world problems. In addition, we subject the previously published technique, which we will call the 'weight product' technique, to the same tests in order to provide directly comparable information.
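The abstract does not spell out the mechanics of the clamping procedure; the sketch below is a minimal illustration only, assuming that 'clamping' means holding one input feature at a time to a constant value across the dataset (here its sample mean, one plausible choice) and measuring the resulting rise in error of an already-trained network. The names clamping_impacts, predict, X and y are hypothetical and not taken from the paper.

    import numpy as np

    def clamping_impacts(predict, X, y, clamp_value=None):
        """Rank input features by the error increase caused by clamping
        each feature to a constant across the whole dataset.

        predict: callable mapping an (n_samples, n_features) array to
                 predictions; assumed to wrap a trained feedforward net.
        clamp_value: constant to clamp to; defaults to the feature mean
                     (an illustrative choice, not the paper's).
        """
        # Reference error with all inputs left free.
        baseline_error = np.mean((predict(X) - y) ** 2)
        impacts = np.empty(X.shape[1])
        for j in range(X.shape[1]):
            X_clamped = X.copy()
            # Hold feature j fixed for every sample.
            X_clamped[:, j] = X[:, j].mean() if clamp_value is None else clamp_value
            clamped_error = np.mean((predict(X_clamped) - y) ** 2)
            # A large error increase suggests feature j has high impact.
            impacts[j] = clamped_error - baseline_error
        # Indices of features, most impactful first, plus raw scores.
        return np.argsort(impacts)[::-1], impacts

Note that such a ranking depends only on the trained model's input-output behaviour, not on its internal weights, which is consistent with the abstract's claim that the rankings are stable across architectures and training-parameter settings.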