E.J.M. Hendriks et al., Intraobserver and interobserver reliability of assessments of impairments and disabilities, Physical Therapy, 77(10), 1997, pp. 1097-1106
Background and Purpose. The purpose of this study was to evaluate the interobserver and intraobserver reliability of assessments of impairments and disabilities. Subjects and Methods. One physical therapist's assessments were examined for intraobserver reliability. Judgments of two pairs of therapists were used to examine interobserver reliability. Reliability was assessed by Cohen's kappa. Results. Of the 42 impairments and disabilities assessed by the physical therapist in the intraobserver reliability study, kappa values could be calculated for 33 items. For 31 items (94%), kappa values ranged from .40 to .91, and 2 items (6%) had kappa values of less than .40. To determine interobserver reliability, 37 items were assessed in one practice. Kappa values could be calculated for 34 items, with 30 items (88%) having kappa values ranging from .41 to .80 and 4 items (12%) showing "poor" agreement. In the second practice, 47 items were assessed for interobserver reliability. Kappa values could be calculated for 40 items, with 11 items (27.5%) having kappa values ranging from .41 to .54. Poor agreement was shown for the remaining 29 items (72.5%). Conclusion and Discussion. Assessments of impairments and disabilities are potentially reliable. The differences between the practices in the interobserver reliability study can be explained by the fact that one of the therapists did not receive training in the use of the assessment form. More generalizable conclusions will require further study with more subjects and therapists.
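For readers unfamiliar with the statistic used throughout this abstract, Cohen's kappa corrects the raw proportion of rater agreement for the agreement expected by chance from each rater's marginal frequencies. The sketch below is illustrative only; the ratings are invented, not data from the study.

```python
from collections import Counter

def cohen_kappa(rater_a, rater_b):
    """Cohen's kappa for two raters' categorical judgments.

    kappa = (p_o - p_e) / (1 - p_e), where p_o is the observed
    proportion of agreement and p_e is the agreement expected by
    chance, computed from each rater's marginal category frequencies.
    """
    assert len(rater_a) == len(rater_b), "raters must score the same items"
    n = len(rater_a)
    # Observed agreement: fraction of items both raters scored identically
    p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Chance agreement from the product of the raters' marginal frequencies
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    categories = set(freq_a) | set(freq_b)
    p_e = sum(freq_a[c] * freq_b[c] for c in categories) / (n * n)
    return (p_o - p_e) / (1 - p_e)

# Hypothetical example: two therapists scoring the same 10 patients
# on a single present/absent item of an assessment form
a = ["yes", "yes", "no", "yes", "no", "no", "yes", "yes", "no", "yes"]
b = ["yes", "no", "no", "yes", "no", "yes", "yes", "yes", "no", "yes"]
print(round(cohen_kappa(a, b), 2))  # → 0.58
```

On the benchmarks implied by the abstract, a value like 0.58 would fall in the moderate range (.41 to .80), while values below .40 would count as "poor" agreement.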