We evaluated four methods for computing confidence intervals for cost-effectiveness ratios derived from randomized controlled trials: the box method, the Taylor series method, the nonparametric bootstrap method, and the Fieller theorem method. We performed a Monte Carlo experiment to compare these methods, investigating the relative performance of each and assessing whether it was affected by differing distributions of costs (normal and log-normal) and effects (a 10% absolute difference in mortality, arising from mortality rates of 25% versus 15% in the two groups as well as from rates of 55% versus 45%), or by differing levels of correlation between costs and effects (correlations of -0.50, -0.25, 0.0, 0.25, and 0.50).
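As a concrete illustration of the last of these methods, the sketch below shows one plausible implementation of the Fieller theorem interval for a ratio of incremental means. The function name and its inputs (point estimates of the incremental cost and effect, their variances, and their covariance) are illustrative assumptions, not code from the study.

```python
from math import sqrt

def fieller_ci(d_cost, d_eff, var_dc, var_de, cov_cde, z=1.96):
    """Fieller-theorem confidence interval for the ratio d_cost / d_eff
    (hypothetical helper; inputs are the incremental cost and effect,
    their variances, and their covariance).

    The interval is the set of ratios r satisfying
        (d_cost - r*d_eff)**2 <= z**2 * (var_dc - 2*r*cov_cde + r**2*var_de),
    i.e. the region between the roots of the quadratic a*r**2 + b*r + c = 0.
    """
    a = d_eff**2 - z**2 * var_de
    b = -2.0 * (d_cost * d_eff - z**2 * cov_cde)
    c = d_cost**2 - z**2 * var_dc
    disc = b * b - 4.0 * a * c
    if a <= 0 or disc < 0:
        # The incremental effect is not significantly different from
        # zero, so the solution set is unbounded: no finite interval.
        return None
    return ((-b - sqrt(disc)) / (2.0 * a), (-b + sqrt(disc)) / (2.0 * a))
```

Note that a finite interval exists only when the incremental effect is itself significantly different from zero (a > 0); otherwise Fieller's theorem yields an unbounded set, which the sketch signals by returning None.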
The principal criterion used to evaluate the performance of the methods was the probability of miscoverage; the symmetry of that miscoverage between the lower and upper limits of the interval served as a secondary criterion. Overall probabilities of miscoverage for the nonparametric bootstrap method and the Fieller theorem method were more accurate than those for the other two methods. The Taylor series method produced confidence intervals that asymmetrically underestimated the upper limit of the interval. Confidence intervals for cost-effectiveness ratios estimated with the nonparametric bootstrap method and the Fieller theorem method were thus more dependably accurate than those estimated with the Taylor series or box methods. Routine reporting of these intervals will allow individuals who use cost-effectiveness ratios to make clinical and policy judgments that better identify when an intervention is a good value for its cost.
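For comparison with the closed-form interval above, here is a minimal sketch of the nonparametric (percentile) bootstrap interval, assuming patient-level cost and effect data from the two trial arms. The function name, the resampling scheme, and the simulated example data are illustrative assumptions; resampling patients within each arm keeps each patient's cost and effect paired, preserving their correlation in every replicate.

```python
import numpy as np

rng = np.random.default_rng(42)

def bootstrap_ratio_ci(cost_t, eff_t, cost_c, eff_c,
                       n_boot=2000, alpha=0.05):
    """Percentile-bootstrap CI for the incremental cost-effectiveness
    ratio (difference in mean cost / difference in mean effect).
    Hypothetical helper, not the study's code."""
    ratios = np.empty(n_boot)
    for b in range(n_boot):
        i = rng.integers(0, len(cost_t), len(cost_t))  # treated-arm patients
        j = rng.integers(0, len(cost_c), len(cost_c))  # control-arm patients
        d_cost = cost_t[i].mean() - cost_c[j].mean()
        d_eff = eff_t[i].mean() - eff_c[j].mean()
        # Replicates in which d_eff crosses zero yield extreme ratios,
        # one reason interval miscoverage is worth studying.
        ratios[b] = d_cost / d_eff
    return np.quantile(ratios, [alpha / 2, 1.0 - alpha / 2])

# Illustrative data echoing one simulated scenario: log-normal costs,
# mortality of 15% (treated) versus 25% (control).
n = 200
cost_t = rng.lognormal(mean=9.2, sigma=0.5, size=n)
cost_c = rng.lognormal(mean=9.0, sigma=0.5, size=n)
eff_t = rng.binomial(1, 0.85, size=n)  # survival indicator, treated
eff_c = rng.binomial(1, 0.75, size=n)  # survival indicator, control

lo, hi = bootstrap_ratio_ci(cost_t, eff_t, cost_c, eff_c)
print(f"95% bootstrap CI for incremental cost per life saved: ({lo:,.0f}, {hi:,.0f})")
```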
© 1997 by John Wiley & Sons, Ltd.