C. Giguere et al., AUTOMATIC SPEECH RECOGNITION EXPERIMENTS WITH A MODEL OF NORMAL AND IMPAIRED PERIPHERAL HEARING, Acustica, 83(6), 1997, pp. 1065-1076
Automatic speech recognition experiments were carried out using a model of normal and impaired peripheral hearing as a front-end preprocessor to a neural-network recognition stage trained and tested over the TIMIT speech database. The simulation of a flat mild/moderate sensorineural hearing loss led to a significant decrease in recognition performance compared to a simulation of normal hearing. Analyses of the confusion matrices using multidimensional scaling techniques showed that the decrements in scores were not associated with significant changes in the pattern of phoneme confusions. Consonant recognition was dominated by the features manner and place of articulation, but the features sonority, frication, voicing, and sibilance could also be detected. Vowel recognition was dominated by the first two formant frequencies. The results are in broad agreement with the speech perception data for normal and hearing-impaired listeners for the type of audiometric configuration simulated. The main discrepancy between the system and human data is the significantly lower recognition performance found for vowels, particularly when simulating normal hearing.
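As a minimal sketch (not the authors' code), the kind of multidimensional scaling analysis described above can be reproduced in Python with scikit-learn: a phoneme confusion matrix is symmetrised, converted into a dissimilarity matrix, and embedded in a low-dimensional space whose axes can then be interpreted against phonetic features. The phoneme set and confusion counts below are illustrative placeholders, not data from the study.

import numpy as np
from sklearn.manifold import MDS

# Hypothetical confusion matrix over four consonants: rows are
# presented phonemes, columns are recognised phonemes (counts).
phonemes = ["p", "t", "b", "d"]
confusions = np.array([
    [50, 20,  8,  2],
    [18, 55,  3,  9],
    [ 7,  4, 48, 21],
    [ 3, 10, 19, 52],
], dtype=float)

# Symmetrise, then convert similarities (confusion counts) into
# dissimilarities: phonemes confused often are treated as "close".
sim = (confusions + confusions.T) / 2.0
sim /= sim.max()
dissim = 1.0 - sim
np.fill_diagonal(dissim, 0.0)

# Embed in two dimensions; in an analysis like the one reported,
# the recovered axes are interpreted against features such as
# manner/place of articulation or voicing.
mds = MDS(n_components=2, dissimilarity="precomputed", random_state=0)
coords = mds.fit_transform(dissim)

for ph, (x, y) in zip(phonemes, coords):
    print(f"{ph}: ({x:+.3f}, {y:+.3f})")

Points that land close together in the embedding correspond to phonemes the recogniser confuses frequently, which is how a stable confusion pattern (as reported for the impaired-hearing simulation) would show up as a largely unchanged spatial configuration.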