A new method is proposed for comparing all predictors in a multiple regression model. This method generates a measure of predictor criticality, which is distinct from and has several advantages over traditional indices of predictor importance.
Using the bootstrapping (resampling with replacement) procedure, a large number of samples are obtained from a given data set which contains one response variable and p predictors. For each sample, all 2^p − 1 subset regression models are fitted and the best subset model is selected. Thus, the (multinomial) distribution of the probability that each of the 2^p − 1 subsets is 'the best' model for the data set is obtained.
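A minimal sketch of this resampling-and-selection step is given below. It assumes an ordinary least squares fit and adjusted R^2 as the selection criterion (the method itself allows any goodness-of-fit measure); the function names and the choice of NumPy are illustrative, not part of the original proposal.

```python
import itertools
import numpy as np

def adjusted_r2(y, X_sub):
    """Adjusted R^2 for an OLS fit of y on the columns of X_sub (intercept added)."""
    n, k = X_sub.shape
    Xd = np.column_stack([np.ones(n), X_sub])
    beta, *_ = np.linalg.lstsq(Xd, y, rcond=None)
    resid = y - Xd @ beta
    ss_res = resid @ resid
    ss_tot = ((y - y.mean()) ** 2).sum()
    r2 = 1.0 - ss_res / ss_tot
    return 1.0 - (1.0 - r2) * (n - 1) / (n - k - 1)

def best_subset_distribution(y, X, n_boot=1000, seed=None):
    """Estimate the multinomial distribution over the 2^p - 1 candidate subsets.

    For each bootstrap resample, every nonempty subset of the p predictors is
    fitted and the subset with the highest adjusted R^2 is recorded; the relative
    frequency with which each subset wins estimates its probability of being
    'the best' model for the data set.
    """
    rng = np.random.default_rng(seed)
    n, p = X.shape
    subsets = [s for r in range(1, p + 1) for s in itertools.combinations(range(p), r)]
    wins = {s: 0 for s in subsets}
    for _ in range(n_boot):
        idx = rng.integers(0, n, size=n)        # resample rows with replacement
        yb, Xb = y[idx], X[idx]
        best = max(subsets, key=lambda s: adjusted_r2(yb, Xb[:, list(s)]))
        wins[best] += 1
    return {s: w / n_boot for s, w in wins.items()}
```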
A predictor's criticality is defined as a function of the probabilities associated with the models that include the predictor. That is, a predictor that is included in a large number of probable models is critical to the identification of the best-fitting regression model and, therefore, to the prediction of the response variable.
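Continuing the sketch above, one natural way to combine these probabilities is simply to sum them over the subsets that contain a given predictor. This is only one of the several possible criticality measures mentioned below, and it assumes the probability dictionary returned by the hypothetical best_subset_distribution function.

```python
def criticality(probs, p):
    """Summed-probability criticality: for each predictor j, the total estimated
    probability of the best-subset models that include j."""
    return [sum(pr for s, pr in probs.items() if j in s) for j in range(p)]
```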
The procedure can be applied to fixed and random regression models and can use any measure of goodness of fit (e.g., adjusted R^2, C_p, AIC) for identifying the best model. Several criticality measures can be defined by using different combinations of the probabilities of the best-fitting models, and asymptotic confidence intervals for each variable's criticality can be derived. The procedure is illustrated with several examples.
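As a rough illustration of such an interval, the summed-probability criticality can be treated as a sample proportion over the bootstrap replicates and given a normal-approximation confidence interval. This is an assumption made for the sketch; the asymptotic derivation in the method itself may proceed differently.

```python
from statistics import NormalDist

def criticality_ci(c_hat, n_boot, level=0.95):
    """Normal-approximation CI for a criticality estimate c_hat, treated as a
    proportion over n_boot bootstrap replicates (an illustrative assumption)."""
    z = NormalDist().inv_cdf(0.5 + level / 2)
    se = (c_hat * (1.0 - c_hat) / n_boot) ** 0.5
    return max(0.0, c_hat - z * se), min(1.0, c_hat + z * se)
```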