Inspired by the recent upsurge of interest in Bayesian methods, we consider adaptive regularization. A generalization-based scheme for adaptation of regularization parameters is introduced and compared to Bayesian regularization. We show that pruning arises naturally within both adaptive regularization schemes. As a model example we have chosen the simplest possible: estimating the mean of a random variable with known variance. Marked similarities are found between the two methods, in that both involve a "noise limit" below which they regularize with infinite weight decay, i.e., they prune. However, pruning is not always beneficial. We show explicitly that both methods may in some cases increase the generalization error. This corresponds to situations where the underlying assumptions of the regularizer are poorly matched to the environment.
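For concreteness, the following is a minimal sketch of how such a noise limit can arise in the mean-estimation example; the notation ($w_0$, $\kappa$, $\bar x$) and the plug-in step are illustrative assumptions, not a reproduction of the paper's derivations. Estimating a mean $w_0$ from $N$ samples $x_i \sim \mathcal{N}(w_0, \sigma^2)$ under weight decay $\kappa$, the penalized estimate is
\[
\hat w(\kappa) \;=\; \operatorname*{arg\,min}_w \;\frac{1}{2N}\sum_{i=1}^N (x_i - w)^2 \;+\; \frac{\kappa}{2}\,w^2 \;=\; \frac{\bar x}{1+\kappa},
\]
so the expected squared estimation error and the weight decay minimizing it are
\[
\mathbb{E}\big[(\hat w(\kappa) - w_0)^2\big] \;=\; \frac{\kappa^2 w_0^2 + \sigma^2/N}{(1+\kappa)^2},
\qquad
\kappa^* \;=\; \frac{\sigma^2}{N w_0^2}.
\]
Replacing the unknown $w_0^2$ by its unbiased sample estimate $\bar x^2 - \sigma^2/N$ yields the adapted weight decay
\[
\hat\kappa \;=\; \frac{\sigma^2/N}{\bar x^2 - \sigma^2/N},
\]
which diverges as $\bar x^2 \downarrow \sigma^2/N$: whenever the observed signal $\bar x^2$ falls below the noise level $\sigma^2/N$, the scheme regularizes with infinite weight decay, $\hat w = 0$, i.e., the parameter is pruned.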