1. In a recent review article, the problem of false-positive inferences arising from multiple comparisons between groups of experimental units or between experimental outcomes was addressed.
2. It was concluded that the most universally applicable solution was to use the Ryan-Holm step-down Bonferroni procedure to control the family-wise (experiment-wise) type 1 error rate. This procedure consists of adjusting the P values resulting from hypothesis testing. It allows for correlation among hypotheses and has been validated by Monte Carlo simulation. It is a simple procedure and can be performed by hand.
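As an informal illustration of the step-down principle, the sketch below implements the plain Holm step-down Bonferroni adjustment of a family of P values; it does not include the Ryan-Holm allowance for correlation among hypotheses described in the review, and the function name and example P values are hypothetical.

```python
def holm_adjust(p_values):
    """Holm step-down Bonferroni adjustment of a family of P values.

    The m raw P values are ranked from smallest to largest; the i-th
    ordered value is multiplied by (m - i + 1), capped at 1, and
    monotonicity is enforced so adjusted values never decrease.
    """
    m = len(p_values)
    order = sorted(range(m), key=lambda i: p_values[i])
    adjusted = [0.0] * m
    running_max = 0.0
    for rank, idx in enumerate(order):             # rank 0 -> smallest P
        candidate = min(1.0, (m - rank) * p_values[idx])
        running_max = max(running_max, candidate)  # enforce monotonicity
        adjusted[idx] = running_max
    return adjusted

# Hypothetical family of three pairwise comparisons
print(holm_adjust([0.011, 0.04, 0.03]))   # approximately [0.033, 0.06, 0.06]
```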
3. However, some investigators prefer to estimate effect sizes and make inferences by way of confidence intervals rather than, or in addition to, testing hypotheses by way of P values, and it is the policy of some editors of biomedical journals to insist on this. It is not generally recognized that confidence intervals, like P values, must be adjusted if multiple inferences are made from confidence intervals in a single experiment.
4. In the present review, it is shown how confidence intervals can be adjusted for multiplicity by an extension of the Ryan-Holm step-down Bonferroni procedure. This can be done for differences between group means in the case of continuous variables and for odds ratios or relative risks in the case of categorical variables set out as 2 × 2 tables.
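The flavour of such an adjustment for a difference between two group means can be sketched as follows. This is a simple single-step Bonferroni adjustment of the confidence level, not the full step-down extension developed in the review; the function name, the sample statistics and the degrees of freedom are assumptions chosen purely for illustration.

```python
from math import sqrt
from scipy.stats import t

def bonferroni_adjusted_ci(mean1, mean2, se1, se2, df, m, alpha=0.05):
    """Confidence interval for a difference between two group means,
    with the nominal level raised from 100(1 - alpha)% to
    100(1 - alpha/m)% to allow for m simultaneous inferences.
    The degrees of freedom (df) are supplied by the caller."""
    diff = mean1 - mean2
    se_diff = sqrt(se1**2 + se2**2)
    crit = t.ppf(1 - alpha / (2 * m), df)   # two-sided, Bonferroni-adjusted
    return diff - crit * se_diff, diff + crit * se_diff

# Hypothetical data: three pairwise comparisons (m = 3), so each interval
# is reported at the 100(1 - 0.05/3)%, i.e. roughly 98.3%, level.
low, high = bonferroni_adjusted_ci(mean1=12.4, mean2=10.1,
                                   se1=0.8, se2=0.7, df=18, m=3)
print(f"adjusted 98.3% CI for the difference: ({low:.2f}, {high:.2f})")
```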