The most commonly used measures for verifying forecasts or simulations of continuous variables are root-mean-square error (rmse) and anomaly correlation. Some disadvantages of these measures are demonstrated. Existing assessment systems for categorical forecasts are discussed briefly. An alternative unbiased verification measure is developed, known as the linear error in probability space (LEPS) score. The LEPS score may be used to assess forecasts of both continuous and categorical variables and has some advantages over rmse and anomaly correlation. The properties of the version of LEPS discussed here are reviewed and compared with an earlier form of LEPS. A skill-score version of LEPS may be used to obtain an overall measure of the skill of a number of forecasts. This skill score is biased, but the bias is negligible if the number of effectively independent forecasts or simulations is large. Some examples are given in which the LEPS skill score is compared with rmse and anomaly correlation.
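The abstract does not state the scoring formula, so the following is only a minimal sketch, assuming the revised LEPS score for a single forecast-observation pair takes the form 3(1 - |Pf - Pv| + Pf^2 - Pf + Pv^2 - Pv) - 1, where Pf and Pv are the climatological cumulative probabilities of the forecast and observed values. The function names and the use of a simple empirical CDF are illustrative assumptions, not part of the paper.

```python
import numpy as np

def empirical_cdf(climatology):
    """Return a function mapping a value to its climatological cumulative probability."""
    clim = np.sort(np.asarray(climatology, dtype=float))

    def cdf(x):
        # Fraction of climatological values not exceeding x (crude plotting position).
        return np.searchsorted(clim, x, side="right") / clim.size

    return cdf

def leps_score(forecast, observed, cdf):
    """LEPS score for one forecast/observation pair (assumed revised form)."""
    pf = cdf(forecast)   # cumulative probability of the forecast value
    pv = cdf(observed)   # cumulative probability of the observed value
    return 3.0 * (1.0 - abs(pf - pv) + pf**2 - pf + pv**2 - pv) - 1.0

# Usage: score a forecast against an observation, given a climatological sample.
climatology = np.random.default_rng(0).normal(size=1000)
cdf = empirical_cdf(climatology)
print(leps_score(forecast=0.5, observed=-0.2, cdf=cdf))
```

Working in probability space in this way means errors are judged relative to the climatological distribution rather than in the raw units of the variable, which is what allows the same score to be applied to continuous and categorical quantities.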