Classifiers built on small training sets are usually biased or unstable. Different techniques exist to construct more stable classifiers, but it is not clear which ones are good, and whether they really stabilize the classifier or merely improve its performance. In this paper bagging (bootstrapping and aggregating) [L. Breiman, Bagging predictors, Machine Learning 24(2), 123-140 (1996)] is studied for a number of linear classifiers. A measure for the instability of classifiers is introduced, and the influence of regularization and bagging on this instability and on the generalization error of linear classifiers is investigated. A simulation study shows that, in general, bagging is not a stabilizing technique. It is also demonstrated that the instability of a classifier can be used to predict how useful bagging will be. Finally, it is shown experimentally that bagging may improve the performance of the classifier only in very unstable situations. (C) 1998 Pattern Recognition Society. Published by Elsevier Science Ltd. All rights reserved.
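
The abstract does not spell out the exact aggregation rule or linear classifiers studied in the paper; the sketch below is a minimal illustration of Breiman's original bagging recipe (bootstrap replicates plus majority vote), using scikit-learn's logistic regression as a stand-in linear base classifier. All names, the toy data, and the parameter choices are illustrative assumptions, not taken from the paper.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def bag_linear_classifiers(X, y, n_bootstrap=25):
    """Train one linear classifier per bootstrap replicate of (X, y)."""
    n = len(y)
    models = []
    for _ in range(n_bootstrap):
        idx = rng.integers(0, n, size=n)  # draw n samples with replacement
        models.append(LogisticRegression().fit(X[idx], y[idx]))
    return models

def predict_majority(models, X):
    """Aggregate the bootstrap classifiers by majority vote."""
    votes = np.stack([m.predict(X) for m in models])  # (n_models, n_samples)
    # With two classes labelled 0/1, a majority vote is a thresholded mean.
    return (votes.mean(axis=0) >= 0.5).astype(int)

# Toy usage: two Gaussian classes and a deliberately small training set,
# the regime where the abstract suggests bagging can help.
X_train = np.vstack([rng.normal(0, 1, (10, 5)), rng.normal(1, 1, (10, 5))])
y_train = np.array([0] * 10 + [1] * 10)
X_test = np.vstack([rng.normal(0, 1, (100, 5)), rng.normal(1, 1, (100, 5))])
y_test = np.array([0] * 100 + [1] * 100)

models = bag_linear_classifiers(X_train, y_train)
accuracy = (predict_majority(models, X_test) == y_test).mean()
print("bagged test accuracy:", accuracy)
```

For linear base classifiers one could alternatively aggregate by averaging the fitted coefficient vectors rather than voting; which rule the paper adopts is not stated in the abstract.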