This study explains the method of cross validation for assessing the forecast skill of empirical prediction models. Cross validation provides a relatively accurate measure of an empirical procedure's ability to produce a useful prediction rule from a historical dataset. The method works by omitting observations and then measuring "hindcast" errors from attempts to predict these missing observations from the remaining data. The idea is to remove the information about the omitted observations that would be unavailable in real forecast situations and to determine how well the chosen procedure selects prediction rules when such information is deleted. The authors examine the methodology of cross validation and its potential pitfalls in practical applications through a set of examples. The concepts behind cross validation are quite general and need to be considered whenever empirical forecast methods, regardless of their sophistication, are employed.
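As a concrete illustration of the omit-and-hindcast procedure described above, the following minimal Python sketch performs leave-one-out cross validation of a simple empirical prediction rule. The linear-regression rule and the synthetic dataset are assumptions chosen for illustration only; they are not models or data from the study.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical historical dataset (an assumption for illustration):
# one predictor x and one predictand y with a modest linear relationship.
n = 30
x = rng.normal(size=n)
y = 0.6 * x + rng.normal(scale=0.8, size=n)

hindcast_err = np.empty(n)   # regression hindcast errors
clim_err = np.empty(n)       # climatological (mean-only) hindcast errors

for i in range(n):
    keep = np.arange(n) != i                            # omit observation i
    slope, intercept = np.polyfit(x[keep], y[keep], 1)  # refit rule without it
    hindcast_err[i] = y[i] - (slope * x[i] + intercept)
    # The reference forecast (the sample mean) is also recomputed without
    # observation i, so no information about it leaks into either hindcast.
    clim_err[i] = y[i] - y[keep].mean()

# A simple cross-validated skill score: 1 - MSE(model) / MSE(climatology).
skill = 1.0 - np.mean(hindcast_err**2) / np.mean(clim_err**2)
print(f"cross-validated skill score: {skill:.3f}")
```

Note that every quantity estimated from the data, including the climatological reference mean, is recomputed with each observation withheld; retaining any such quantity across folds would restore exactly the information the method is meant to delete.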