Mesoscale model gridpoint temperature data from simulations in the southwestern United States during the summer of 1990 are compared with both observations and statistical guidance from large-scale models over a 32-day period. Although the raw model temperatures at the lowest sigma level are typically much lower than observed, once the mean temperature bias is removed the model's high-temperature values compare favorably with both observations and operational statistical guidance products. A simple 7-day running mean bias correction that could be applied in an operational environment is also tested and found to produce good results. These comparisons suggest that the ability of mesoscale model gridpoint data to produce useful and accurate forecast products through very simple bias corrections should be explored fully as mesoscale model data become routinely available.
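
To make the running-mean idea concrete, the short Python sketch below removes the mean forecast-minus-observed bias over the trailing seven verified days from a new forecast. This is only an illustration of the general technique, not the study's actual implementation; the function name, array layout, and sample temperature values are all hypothetical.

    import numpy as np

    def running_mean_bias_correction(past_forecasts, past_observations,
                                     new_forecast, window=7):
        """Remove the trailing `window`-day mean forecast bias from a
        new forecast. `past_forecasts` and `past_observations` are
        equal-length sequences of verified forecast/observation pairs,
        oldest first."""
        fcst = np.asarray(past_forecasts, dtype=float)[-window:]
        obs = np.asarray(past_observations, dtype=float)[-window:]
        mean_bias = np.mean(fcst - obs)  # negative when the model runs cool
        return new_forecast - mean_bias

    # Hypothetical data: the model runs about 3 deg C too cool,
    # consistent with the cold bias noted above.
    past_fcst = [31.0, 32.5, 30.8, 33.1, 31.9, 32.2, 30.5]
    past_obs = [34.2, 35.4, 33.9, 36.0, 34.8, 35.1, 33.6]
    print(running_mean_bias_correction(past_fcst, past_obs, 31.5))  # ~34.5

Because the window slides forward each day, a correction of this form adapts automatically to slow changes in the model's bias, which is what makes it attractive for an operational setting.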