Sounds from the temporomandibular joint were recorded on audiotape from 238 individuals by placing microphones in both ears. The recordings were later digitized at a sample rate of 1.7 kHz with 10-bit resolution and stored on computer disk. At least two open-close cycles were assessed from each individual; 2707 different individual sounds were analysed in the time and frequency domains. The sounds were classified as: (a) single, short-duration (clicks), (b) multiple, short-duration (creaks) and (c) long-duration (crepitus). The sounds were further subclassified into either high or low amplitude, by (i) the attack, which produced hard and soft categories, and (ii) by comparing the amplitude between sides: bilateral sounds were those with amplitudes differing by <40 mV; the rest were unilateral.
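These rules lend themselves to a simple decision procedure, of the kind the automated scheme described at the end of this abstract could implement. The Python sketch below is one illustrative encoding, not the authors' implementation: the duration, burst-count, amplitude and attack thresholds are assumed values; only the 40 mV side-difference criterion comes from the text.

    # Illustrative, rule-based encoding of the sound classification.
    # SHORT_MS, HIGH_AMP_MV and FAST_ATTACK_MS are assumed thresholds;
    # only the 40 mV bilateral criterion (BILATERAL_MV) is stated above.
    SHORT_MS = 50        # assumed cut-off between short and long events
    HIGH_AMP_MV = 100    # assumed split between high and low amplitude
    FAST_ATTACK_MS = 5   # assumed rise-time cut-off for a hard attack
    BILATERAL_MV = 40    # side-to-side amplitude difference (from the text)

    def classify(duration_ms, n_bursts, peak_mv, attack_ms, side_diff_mv):
        """Return (type, amplitude, attack, laterality) for one sound event."""
        if duration_ms >= SHORT_MS:
            kind = "crepitus"            # (c) long duration
        elif n_bursts > 1:
            kind = "creak"               # (b) multiple, short duration
        else:
            kind = "click"               # (a) single, short duration
        amplitude = "high" if peak_mv >= HIGH_AMP_MV else "low"
        attack = "hard" if attack_ms <= FAST_ATTACK_MS else "soft"
        side = "bilateral" if side_diff_mv < BILATERAL_MV else "unilateral"
        return kind, amplitude, attack, side

    # e.g. a single 12 ms burst, 150 mV peak, 3 ms rise, 10 mV side difference:
    print(classify(12, 1, 150, 3, 10))   # ('click', 'high', 'hard', 'bilateral')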
To establish the robustness of the classification, 42 acoustic events were selected to be classified visually by three observers on two separate occasions. Intraobserver agreement was 82% (kappa = 0.75), while interobserver agreement was 60% (kappa = 0.71).
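Both agreement figures are chance-corrected with Cohen's kappa. A minimal sketch of the statistic follows, assuming two raters labelling the same events; the label lists are invented for illustration and are not data from the study.

    # Minimal sketch of Cohen's kappa, the chance-corrected agreement
    # statistic quoted above. The two label lists are invented examples.
    from collections import Counter

    def cohens_kappa(rater_a, rater_b):
        """Agreement between two raters, corrected for chance agreement."""
        n = len(rater_a)
        p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / n        # observed
        ca, cb = Counter(rater_a), Counter(rater_b)
        p_e = sum((ca[k] / n) * (cb[k] / n) for k in set(ca) | set(cb))  # chance
        return (p_o - p_e) / (1 - p_e)

    a = ["click", "click", "creak", "crepitus", "click", "creak"]
    b = ["click", "creak", "creak", "crepitus", "click", "click"]
    print(round(cohens_kappa(a, b), 2))   # 0.45 for these invented labels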
Statistically significant differences were noted between all classifications of sound. These were most marked in the time domain. A simple, automated classification scheme was devised that was capable of categorizing the sounds with 82% agreement (kappa = 0.71) compared to a human observer. Copyright (C) 1996 Elsevier Science Ltd.