IMPROVEMENTS ON CROSS-VALIDATION: THE .632+ BOOTSTRAP METHOD

Citation
B. Efron and R. Tibshirani, IMPROVEMENTS ON CROSS-VALIDATION: THE .632+ BOOTSTRAP METHOD, Journal of the American Statistical Association, 92(438), 1997, pp. 548-560
Number of citations
26
Subject Categories
Statistics & Probability
Volume
92
Issue
438
Year of publication
1997
Pages
548 - 560
Database
ISI
SICI code
Abstract
A training set of data has been used to construct a rule for predicting future responses. What is the error rate of this rule? This is an important question both for comparing models and for assessing a final selected model. The traditional answer to this question is given by cross-validation. The cross-validation estimate of prediction error is nearly unbiased but can be highly variable. Here we discuss bootstrap estimates of prediction error, which can be thought of as smoothed versions of cross-validation. We show that a particular bootstrap method, the .632+ rule, substantially outperforms cross-validation in a catalog of 24 simulation experiments. Besides providing point estimates, we also consider estimating the variability of an error rate estimate. All of the results here are nonparametric and apply to any possible prediction rule; however, we study only classification problems with 0-1 loss in detail. Our simulations include "smooth" prediction rules like Fisher's linear discriminant function and unsmooth ones like nearest neighbors.
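
The following is a minimal sketch, not the authors' code, of how a .632+-style bootstrap error estimate for a 0-1 loss classifier can be computed: the apparent error and the leave-one-out bootstrap error are combined with a weight driven by the relative overfitting rate. The number of bootstrap samples B, the scikit-learn estimator interface, and the toy data are illustrative assumptions.

    # Sketch of a .632+ bootstrap error estimate for 0-1 loss.
    # Names (err632plus, B, clf) and the sklearn interface are assumptions.
    import numpy as np
    from sklearn.neighbors import KNeighborsClassifier

    def err632plus(clf, X, y, B=200, seed=0):
        rng = np.random.default_rng(seed)
        n = len(y)

        # Apparent (resubstitution) error: train and test on the full sample.
        clf.fit(X, y)
        err_bar = np.mean(clf.predict(X) != y)

        # Leave-one-out bootstrap error: each point is scored only by
        # bootstrap samples that do not contain it.
        loss_sum = np.zeros(n)
        loss_cnt = np.zeros(n)
        for _ in range(B):
            idx = rng.integers(0, n, n)            # indices of one bootstrap sample
            out = np.setdiff1d(np.arange(n), idx)  # points left out of this sample
            if out.size == 0:
                continue
            clf.fit(X[idx], y[idx])
            loss_sum[out] += (clf.predict(X[out]) != y[out])
            loss_cnt[out] += 1
        err1 = np.mean(loss_sum[loss_cnt > 0] / loss_cnt[loss_cnt > 0])

        # No-information error rate gamma for 0-1 loss: expected error if
        # predictions and labels were paired at random.
        clf.fit(X, y)
        pred = clf.predict(X)
        gamma = np.mean(pred[None, :] != y[:, None])

        # Relative overfitting rate R and the adaptive weight w in [0.632, 1].
        err1c = min(err1, gamma)
        if gamma > err_bar and err1c > err_bar:
            R = (err1c - err_bar) / (gamma - err_bar)
        else:
            R = 0.0
        w = 0.632 / (1 - 0.368 * R)
        return (1 - w) * err_bar + w * err1c

    # Usage example: 1-nearest-neighbour classifier on a toy two-class problem.
    if __name__ == "__main__":
        rng = np.random.default_rng(1)
        X = rng.normal(size=(60, 2))
        y = (X[:, 0] + rng.normal(scale=0.5, size=60) > 0).astype(int)
        print(err632plus(KNeighborsClassifier(n_neighbors=1), X, y))

Because the apparent error of an unsmooth rule such as 1-nearest-neighbour is optimistically low, the weight w moves toward 1 and the estimate leans on the leave-one-out bootstrap error, which is the behaviour the .632+ rule is designed to provide.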