Assessing inter-rater reliability, whereby data are independently coded and the codings compared for agreement, is a recognised process in quantitative research. However, its applicability to qualitative research is less clear: should researchers be expected to identify the same codes or themes in a transcript, or should they be expected to produce different accounts? Some qualitative researchers argue that assessing inter-rater reliability is an important method for ensuring rigour; others hold that it is unimportant. Yet it has never been formally examined in an empirical qualitative study. Accordingly, to explore the degree of inter-rater reliability that might be expected, six researchers were asked to identify themes in the same focus group transcript. The results showed close agreement on the basic themes, but each analyst 'packaged' the themes differently.