Research on improving the performance of feedforward neural networks has concentrated mostly on the optimal setting of initial weights and learning parameters, sophisticated optimization techniques, architecture optimization, and adaptive activation functions. An alternative approach is presented in this paper, where the neural network dynamically selects training patterns from a candidate training set during training, using the network's currently attained knowledge about the target concept. Sensitivity analysis of the neural network output with respect to small input perturbations is used to quantify the informativeness of candidate patterns. Only the most informative patterns, which are those patterns closest to decision boundaries, are selected for training. Experimental results show a significant reduction in training set size without negatively influencing generalization performance or convergence characteristics. This approach to selective learning is then compared to an alternative in which informativeness is measured as the magnitude of the prediction error.
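The sensitivity-based selection idea can be illustrated with a minimal sketch. The network architecture, weights, and selection fraction below are hypothetical, not taken from the paper: for each candidate pattern, the gradient of the network output with respect to the inputs is computed analytically for a small sigmoid network, and the patterns with the largest gradient norm (i.e., those lying nearest a decision boundary, where the output changes fastest) are retained for training.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

rng = np.random.default_rng(0)

# Hypothetical small feedforward net: 2 inputs, 5 hidden units, 1 output.
W1 = rng.normal(size=(2, 5))
b1 = rng.normal(size=5)
W2 = rng.normal(size=(5, 1))
b2 = rng.normal(size=1)

def input_sensitivity(x):
    """Norm of d(output)/d(input) at pattern x, for a sigmoid net."""
    h = sigmoid(x @ W1 + b1)          # hidden activations, shape (5,)
    o = sigmoid(h @ W2 + b2)          # network output, shape (1,)
    do_dz2 = o * (1 - o)              # sigmoid derivative at the output
    dh_dz1 = h * (1 - h)              # sigmoid derivatives at the hidden layer
    # Chain rule: d(output)/d(input) = W1 @ (h' * W2 * o')
    grad = W1 @ (dh_dz1 * (W2[:, 0] * do_dz2))  # shape (2,)
    return np.linalg.norm(grad)

# Rank a candidate set by sensitivity and keep the most informative half.
candidates = rng.uniform(-1, 1, size=(20, 2))
scores = np.array([input_sensitivity(x) for x in candidates])
selected = candidates[np.argsort(scores)[::-1][:10]]
```

In a full implementation the scores would be recomputed periodically during training, since the decision boundaries, and hence which patterns are most informative, shift as the weights are updated.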