STUDY OF INTEROBSERVER RELIABILITY IN CLINICAL-ASSESSMENT OF RSV LOWER RESPIRATORY ILLNESS - A PEDIATRIC INVESTIGATORS COLLABORATIVE NETWORK FOR INFECTIONS IN CANADA (PICNIC) STUDY
E.E.L. Wang et al., Pediatric Pulmonology, 22(1), 1996, pp. 23-27
Randomized trials of ribavirin therapy have used clinical scores to assess illness severity, but little information has been published on agreement between observers for these findings. We sought to determine interobserver agreement for (1) a history of apnea or respiratory failure; (2) assessment of cyanosis, respiratory rate, retractions, and oximetry; and (3) determination of the reason for hospitalization (requirement for medications, supportive care, underlying illness, or poor home environment). At eight centers, 137 RSV-infected patients were each assessed by two observers blinded to each other's assessments, with no interventions made between assessments. Observations were categorized, and agreement was summarized as percentage of observed agreement, Pearson correlation, or a kappa statistic. Observed agreement for a history of either apnea or respiratory arrest was at least 90% at all centers, with seven of the eight centers in total agreement. At all centers except one, agreement on the reason the patient remained in hospital was at least 80%. Observed agreement for assessing cyanosis was at least 94% at all eight centers. The correlation coefficient for respiratory rate varied from 0.42 to 0.97 across centers. Kappa values for agreement beyond chance for retractions varied from 0.05 to 1.00, and kappa values for oxygen saturation measures varied from 0.31 to 0.70. Although the effect was not statistically significant, variation appeared to increase as the time between assessments increased. In conclusion, agreement for historical findings and assessment of cyanosis was high, but agreement varied widely across the other assessments. Training to ensure consistent and reproducible assessment by different examiners will be necessary if these findings are to be used as outcome variables in clinical trials. (C) 1996 Wiley-Liss, Inc.