In pattern recognition tasks, the training set of a pattern classifier is often chosen arbitrarily and given little scrutiny beforehand. This correspondence proposes several graph-theoretic methods for pruning data sets, which reduce redundancy in the original data set while preserving its structure as far as possible. The proposed methods are applied to the training sets for pattern recognition by a multilayered perceptron neural network (MLP-NN) and to the placement of the centroids of a radial basis function neural network (RBF-NN). The advantage of the proposed graph-theoretic methods is that they require no calculation of the statistical distributions of the clusters. Experimental comparisons with both the k-means clustering and the learning vector quantization (LVQ) methods show that the proposed methods give encouraging performance in terms of computation for data classification tasks.
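As a rough illustration of the general idea (not the specific methods of this correspondence), one simple graph-theoretic pruning rule builds a k-nearest-neighbour graph over the training samples and discards "interior" points whose neighbours all share their class label, so redundant samples are removed while class boundaries, and hence the data structure, are retained. The function names and the neighbourhood rule below are hypothetical choices for this sketch.

```python
# Illustrative sketch only: a generic k-NN-graph pruning rule, not
# necessarily the pruning methods proposed in the correspondence.
import math

def knn_indices(points, i, k):
    """Indices of the k nearest neighbours of point i (Euclidean distance)."""
    dists = [(math.dist(points[i], p), j)
             for j, p in enumerate(points) if j != i]
    dists.sort()
    return [j for _, j in dists[:k]]

def prune(points, labels, k=3):
    """Keep a point iff at least one of its k nearest neighbours carries a
    different label; interior points surrounded by same-class neighbours
    are treated as redundant and dropped, so class boundaries survive.
    Returns the indices of the retained samples."""
    keep = []
    for i in range(len(points)):
        neighbours = knn_indices(points, i, k)
        if any(labels[j] != labels[i] for j in neighbours):
            keep.append(i)
    return keep
```

Note that, consistent with the abstract's claim, a rule of this kind needs only pairwise distances on the graph and no estimate of the clusters' statistical distributions.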