In this paper, we present three different neural-network-based methods for variable selection. OCD (Optimal Cell Damage) is a pruning method that evaluates the usefulness of each input variable and prunes the least useful ones; it is related to the Optimal Brain Damage method of Le Cun et al. Regularization theory proposes to constrain estimators by adding a term to the cost function used to train a neural network. In the Bayesian framework, this additional term can be interpreted as the log of the prior distribution over the weights. We propose two priors (a Gaussian and a Gaussian mixture) and show that this regularization approach selects efficient subsets of variables. Our methods are compared to conventional statistical selection procedures and are shown to improve significantly on them.
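For concreteness, the regularized cost described above can be sketched as follows (the notation here is illustrative, not taken from the paper): let $E$ denote the data-fit error, $w_i$ the network weights, and $\lambda$, $\pi_k$, $\sigma_k$ hyperparameters. A Gaussian prior on the weights yields the familiar weight-decay penalty,

\[
C = E + \lambda \sum_i w_i^2,
\]

since $-\log \mathcal{N}(w_i; 0, \sigma^2) = w_i^2 / 2\sigma^2 + \text{const}$. A two-component Gaussian mixture prior instead gives

\[
C = E - \sum_i \log\!\big( \pi_1\, \mathcal{N}(w_i; 0, \sigma_1^2) + \pi_2\, \mathcal{N}(w_i; 0, \sigma_2^2) \big),
\]

where a narrow component ($\sigma_1$ small) drives unimportant weights toward zero while a broad component ($\sigma_2$ large) leaves useful weights essentially unpenalized, which is what makes the mixture prior suitable for selecting subsets of variables.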