Publication and related bias in meta-analysis: Power of statistical tests and prevalence in the literature

Citation
J.A.C. Sterne et al., Publication and related bias in meta-analysis: Power of statistical tests and prevalence in the literature, J. Clin. Epidemiol., 53(11), 2000, pp. 1119-1129
Number of citations
40
Subject categories
Environmental Medicine & Public Health; Medical Research General Topics
Journal title
JOURNAL OF CLINICAL EPIDEMIOLOGY
Journal ISSN
0895-4356
Volume
53
Issue
11
Year of publication
2000
Pages
1119 - 1129
Database
ISI
SICI code
0895-4356(200011)53:11<1119:PARBIM>2.0.ZU;2-W
Abstract
Publication and selection biases in meta-analysis are more likely to affect small studies, which also tend to be of lower methodological quality. This may lead to "small-study effects," where the smaller studies in a meta-analysis show larger treatment effects. Small-study effects may also arise because of between-trial heterogeneity. Statistical tests for small-study effects have been proposed, but their validity has been questioned. A set of typical meta-analyses containing 5, 10, 20, and 30 trials was defined based on the characteristics of 78 published meta-analyses identified in a hand search of eight journals from 1993 to 1997. Simulations were performed to assess the power of a weighted regression method and a rank correlation test in the presence of no bias, moderate bias, or severe bias. We based evidence of small-study effects on P < 0.1. The power to detect bias increased with increasing numbers of trials. The rank correlation test was less powerful than the regression method. For example, assuming a control group event rate of 20% and no treatment effect, moderate bias was detected with the regression test in 13.7%, 23.5%, 40.1%, and 51.6% of meta-analyses with 5, 10, 20, and 30 trials. The corresponding figures for the correlation test were 8.5%, 14.7%, 20.4%, and 26.0%, respectively. Severe bias was detected with the regression method in 23.5%, 56.1%, 88.3%, and 95.9% of meta-analyses with 5, 10, 20, and 30 trials, as compared with 11.9%, 31.1%, 45.3%, and 65.4% with the correlation test. Similar results were obtained in simulations incorporating moderate treatment effects. However, the regression method gave false-positive rates that were too high in some situations (large treatment effects, few events per trial, or all trials of similar sizes). Using the regression method, evidence of small-study effects was present in 21 (26.9%) of the 78 published meta-analyses. Tests for small-study effects should routinely be performed in meta-analysis. Their power is, however, limited, particularly for moderate amounts of bias or meta-analyses based on a small number of small studies. When evidence of small-study effects is found, careful consideration should be given to possible explanations for it in the reporting of the meta-analysis. (C) 2000 Elsevier Science Inc. All rights reserved.
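Note
The two tests assessed in the abstract are the weighted regression test commonly attributed to Egger et al. (1997) and the rank correlation test of Begg and Mazumdar (1994). Below is a minimal Python sketch of both, assuming NumPy and SciPy are available; the data at the end are hypothetical log odds ratios and standard errors for illustration only, not values from the paper. The P < 0.1 threshold follows the criterion stated in the abstract.

    import numpy as np
    from scipy import stats

    def egger_test(effects, ses):
        """Weighted regression (Egger) test for small-study effects.

        Regresses the standardized effect (effect / SE) on precision (1 / SE);
        an intercept different from zero suggests funnel-plot asymmetry.
        Returns the intercept and its two-sided p-value.
        """
        effects = np.asarray(effects, dtype=float)
        ses = np.asarray(ses, dtype=float)
        z = effects / ses              # standardized treatment effects
        precision = 1.0 / ses          # trial precisions
        fit = stats.linregress(precision, z)
        dof = effects.size - 2
        t_stat = fit.intercept / fit.intercept_stderr
        p = 2.0 * stats.t.sf(abs(t_stat), dof)
        return fit.intercept, p

    def begg_test(effects, variances):
        """Rank correlation (Begg-Mazumdar) test for small-study effects.

        Computes Kendall's tau between standardized deviates from the
        fixed-effect pooled estimate and the trial variances.
        """
        effects = np.asarray(effects, dtype=float)
        variances = np.asarray(variances, dtype=float)
        w = 1.0 / variances
        pooled = np.sum(w * effects) / np.sum(w)   # fixed-effect pooled estimate
        v_star = variances - 1.0 / np.sum(w)       # variance of (effect_i - pooled)
        deviates = (effects - pooled) / np.sqrt(v_star)
        tau, p = stats.kendalltau(deviates, variances)
        return tau, p

    # Hypothetical example: five trials, smaller trials showing larger effects.
    log_or = np.array([-0.9, -0.5, -0.3, -0.2, -0.1])   # log odds ratios (illustrative)
    se = np.array([0.50, 0.35, 0.25, 0.18, 0.10])       # standard errors (illustrative)

    intercept, p_egger = egger_test(log_or, se)
    tau, p_begg = begg_test(log_or, se**2)
    print(f"Egger: intercept={intercept:.2f}, P={p_egger:.3f}, evidence: {p_egger < 0.1}")
    print(f"Begg:  tau={tau:.2f}, P={p_begg:.3f}, evidence: {p_begg < 0.1}")

Consistent with the abstract's findings, with only five trials neither sketch should be expected to have much power, and the regression test will generally yield smaller p-values than the rank correlation test on the same data.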