Previous work has shown that combinations of separate forecasts produced by judgment are inferior to those produced by simple averaging.
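For concreteness, the simple average used as the benchmark throughout combines the n individual forecasts for a period with equal weights; the notation below is ours, not the paper's:

```latex
% Equal-weight (simple average) combination of n forecasts for period t
\hat{y}_t = \frac{1}{n} \sum_{i=1}^{n} f_{i,t}
```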
However, in that research, judges were not informed of outcomes after producing each combined forecast. Our first experiment shows that when judges are given this information, they learn to weight the separate forecasts appropriately. Even so, their judgments, though improved, are still not significantly better than the simple average because they contain a random error component. Bootstrapping can be used to remove this inconsistency and produce results that outperform the average.
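In this literature, "bootstrapping" means replacing the judge with a model of the judge: regress the judge's combined forecasts on the individual forecasts and use the model's fitted values, which retain the learned weighting but discard the random error. The sketch below is a minimal illustration under that reading; the function name and data are ours, not taken from the experiments.

```python
import numpy as np

def bootstrap_judge(individual_forecasts, judge_combinations):
    """Regress the judge's combined forecasts on the individual
    forecasts (OLS with an intercept) and return the fitted values:
    the judge's systematic weighting without the random error."""
    X = np.column_stack([np.ones(len(judge_combinations)),
                         individual_forecasts])
    coef, *_ = np.linalg.lstsq(X, judge_combinations, rcond=None)
    return X @ coef

# Hypothetical data: two forecasters over eight periods, plus the
# judge's own combined forecast for each of those periods.
forecasts = np.array([[100, 90], [110, 95], [105, 100], [120, 110],
                      [115, 105], [125, 118], [130, 120], [128, 122]],
                     dtype=float)
judge = np.array([97., 105., 103., 117., 112., 123., 127., 126.])
print(bootstrap_judge(forecasts, judge))
```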
In our second and third experiments, we provided judges with information about the errors made by the individual forecasters. Results show that providing their mean absolute percentage errors, updated each period, enables judges to combine the individual forecasts in a way that outperforms the simple average.
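The feedback described here is each forecaster's mean absolute percentage error (MAPE), recomputed after every period. The sketch below shows one way to compute such running MAPEs; the closing inverse-MAPE weighting is only an illustrative mechanical use of the feedback, not a claim about how the judges actually combined.

```python
import numpy as np

def running_mape(forecasts, actuals):
    """Each forecaster's mean absolute percentage error,
    recomputed ('updated') after every period."""
    ape = 100 * np.abs(forecasts - actuals[:, None]) / np.abs(actuals[:, None])
    return np.cumsum(ape, axis=0) / np.arange(1, len(actuals) + 1)[:, None]

# Hypothetical outcomes and two forecasters' predictions
actuals = np.array([102., 108., 104., 118., 114.])
forecasts = np.array([[100, 90], [110, 95], [105, 100],
                      [120, 110], [115, 105]], dtype=float)
mape = running_mape(forecasts, actuals)
print(mape[-1])  # each forecaster's MAPE after the last period

# One mechanical use of the feedback (our illustration, not the
# judges' strategy): weight next period's forecasts by inverse MAPE.
weights = 1.0 / mape[-1]
weights /= weights.sum()
print(weights @ np.array([125., 118.]))
```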
© 1999 Elsevier Science B.V. All rights reserved.