Reproducibility of peer review in clinical neuroscience - Is agreement between reviewers any greater than would be expected by chance alone?

Citation
P.M. Rothwell and C.N. Martyn, Reproducibility of peer review in clinical neuroscience - Is agreement between reviewers any greater than would be expected by chance alone?, BRAIN, 123, 2000, pp. 1964-1969
Citation count
28
Subject categories
Neurology,"Neurosciences & Behavoir
Journal title
BRAIN
ISSN journal
0006-8950
Volume
123
Year of publication
2000
Part
9
Pages
1964 - 1969
Database
ISI
SICI code
0006-8950(200009)123:9<1964:ROPRIC>2.0.ZU;2-Y
Abstract
We aimed to determine the reproducibility of assessments made by independent reviewers of papers submitted for publication to clinical neuroscience journals and abstracts submitted for presentation at clinical neuroscience conferences. We studied two journals in which manuscripts were routinely assessed by two reviewers, and two conferences in which abstracts were routinely scored by multiple reviewers. Agreement between the reviewers as to whether manuscripts should be accepted, revised or rejected was not significantly greater than that expected by chance [kappa = 0.08, 95% confidence interval (CI) -0.04 to 0.20] for 179 consecutive papers submitted to Journal A, and was poor (kappa = 0.28, 0.12 to 0.40) for 116 papers submitted to Journal B. However, editors were very much more likely to publish papers when both reviewers recommended acceptance than when they disagreed or recommended rejection (Journal A, odds ratio = 73, 95% CI 27 to 200; Journal B, 51, 17 to 155). There was little or no agreement between the reviewers as to the priority (low, medium or high) for publication (Journal A, kappa = -0.12, 95% CI -0.30 to -0.11; Journal B, kappa = 0.27, 0.01 to 0.53). Abstracts submitted for presentation at the conferences were given a score of 1 (poor) to 6 (excellent) by multiple independent reviewers. For each conference, analysis of variance of the scores given to abstracts revealed that differences between individual abstracts accounted for only 10-20% of the total variance of the scores. Thus, although recommendations made by reviewers have considerable influence on the fate of both papers submitted to journals and abstracts submitted to conferences, agreement between reviewers in clinical neuroscience was little greater than would be expected by chance alone.
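
For readers unfamiliar with the statistic quoted above, the following minimal Python sketch shows how Cohen's kappa corrects raw reviewer agreement for the agreement expected by chance. The reviewer recommendations below are invented for illustration only; they are not data from the study.

from collections import Counter

def cohens_kappa(ratings_a, ratings_b):
    """Chance-corrected agreement between two raters.

    kappa = (p_o - p_e) / (1 - p_e), where p_o is the observed
    agreement and p_e is the agreement expected if both raters
    assigned categories independently at their observed marginal
    rates.
    """
    n = len(ratings_a)
    assert n == len(ratings_b) and n > 0
    # Observed proportion of items on which the two reviewers agree.
    p_o = sum(a == b for a, b in zip(ratings_a, ratings_b)) / n
    # Marginal category frequencies for each reviewer.
    freq_a = Counter(ratings_a)
    freq_b = Counter(ratings_b)
    categories = set(freq_a) | set(freq_b)
    # Expected agreement if the reviewers rated independently.
    p_e = sum((freq_a[c] / n) * (freq_b[c] / n) for c in categories)
    return (p_o - p_e) / (1 - p_e)

# Hypothetical accept/revise/reject recommendations from two
# reviewers for ten manuscripts (invented data).
reviewer_1 = ["accept", "revise", "reject", "revise", "accept",
              "reject", "revise", "accept", "revise", "reject"]
reviewer_2 = ["revise", "revise", "accept", "reject", "accept",
              "revise", "reject", "revise", "revise", "reject"]

print(f"kappa = {cohens_kappa(reviewer_1, reviewer_2):+.2f}")

A kappa near zero, as the study found for Journal A, means that the observed agreement is essentially what the reviewers' marginal recommendation rates alone would produce by chance.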