National weather services now receive global model forecasts from a number of centers around the world. The existence of these forecasts raises the general question of how the operational forecaster can best use the information that the ensemble of predictions provides. The Australian Bureau of Meteorology receives four global model forecasts in real time, but at present their performance is evaluated almost entirely in a subjective manner. In this study, in addition to the standard objective measures (for example, bias and rms error), several alternative objective measures of model performance are calculated (such as the temporal forecast consistency of a given model and the divergence between different models), in an attempt to provide the forecasters with more effective tools for model assessment. Both kinds of measures are applied to a two-year dataset (October 1989 to September 1991) of daily sea level pressure predictions from the four models. There are two main outcomes of this study. First, the current subjective system of ranking the various models has been augmented with more objectively based performance measures. Second, these performance statistics provide guidance to the operational forecasters in a number of ways: geographical regions with large systematic errors can be identified for each model; case studies are presented that illustrate the utility of the regional maps of bias, consistency, and divergence computed in this study; and, finally, there are regions of uncertainty where no model is consistently superior, so forecasts over these regions should be treated with caution.
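
To make the measures named above concrete, the following is a minimal sketch (not taken from the paper) of how they might be computed on gridded sea level pressure fields. It assumes NumPy arrays of shape (time, lat, lon) holding forecasts and verifying analyses on a common grid at common valid times; the consistency and divergence metrics are expressed here as RMS differences, which is one plausible formulation, and all function and variable names are hypothetical.

    import numpy as np

    def bias_map(fc, an):
        # Time-mean forecast error at each grid point: a map of systematic error.
        return np.mean(fc - an, axis=0)

    def rmse_map(fc, an):
        # Root-mean-square error at each grid point.
        return np.sqrt(np.mean((fc - an) ** 2, axis=0))

    def consistency_map(fc_today, fc_yesterday):
        # Temporal consistency of a single model, taken here as the RMS
        # difference between successive forecasts from that model which
        # verify at the same time (e.g., today's 24-h forecast vs.
        # yesterday's 48-h forecast).
        return np.sqrt(np.mean((fc_today - fc_yesterday) ** 2, axis=0))

    def divergence_map(fc_model_a, fc_model_b):
        # Divergence between two models, taken here as the RMS difference
        # between their forecasts valid at the same time.
        return np.sqrt(np.mean((fc_model_a - fc_model_b) ** 2, axis=0))

    # Hypothetical usage: daily sea level pressure fields (hPa) with shape
    # (n_days, n_lat, n_lon) on a shared verification grid.
    # bias = bias_map(model_fc, analyses)            # -> (n_lat, n_lon) map
    # div  = divergence_map(model_a_fc, model_b_fc)  # -> (n_lat, n_lon) map

Because each function averages only over the time axis, the result is a latitude-longitude map, matching the regional maps of bias, consistency, and divergence discussed in the abstract.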