AUTOMATIC SPEECH RECOGNITION EXPERIMENTS WITH A MODEL OF NORMAL AND IMPAIRED PERIPHERAL HEARING

Citation
C. Giguère et al., AUTOMATIC SPEECH RECOGNITION EXPERIMENTS WITH A MODEL OF NORMAL AND IMPAIRED PERIPHERAL HEARING, Acustica, 83(6), 1997, pp. 1065-1076
Number of citations
27
Subject categories
Acoustics
Journal title
ACUSTICA
ISSN journal
1436-7947
Volume
83
Issue
6
Year of publication
1997
Pages
1065 - 1076
Database
ISI
SICI code
1436-7947(1997)83:6<1065:ASREWA>2.0.ZU;2-C
Abstract
Automatic speech recognition experiments were carried out using a model of normal and impaired peripheral hearing as a front-end preprocessor to a neural-network recognition stage trained and tested over the TIMIT speech database. The simulation of a flat mild/moderate sensorineural hearing loss led to a significant decrease in recognition performance compared to a simulation of normal hearing. Analyses of the confusion matrices using multidimensional scaling techniques showed that the decrements in scores were not associated with significant changes in the pattern of phoneme confusions. Consonant recognition was dominated by the features manner and place of articulation, but the features sonority, frication, voicing, and sibilance could also be detected. Vowel recognition was dominated by the first two formant frequencies. The results are in broad agreement with the speech perception data for normal and hearing-impaired listeners for the type of audiometric configuration simulated. The main discrepancy between the system and human data is the significantly lower recognition performance found for vowels, particularly when simulating normal hearing.
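The abstract's confusion-matrix analysis can be illustrated with a minimal sketch of multidimensional scaling (MDS) applied to a phoneme confusion matrix. The sketch below is not the authors' procedure: the four consonant labels and confusion counts are hypothetical, and it assumes scikit-learn's MDS with a precomputed dissimilarity matrix derived by symmetrizing the confusion probabilities.

```python
# Hedged sketch: MDS of a phoneme confusion matrix (illustrative data only).
import numpy as np
from sklearn.manifold import MDS

# Hypothetical confusion counts for four consonants
# (rows = stimulus, columns = response); real data would
# come from the recognizer's output over the test set.
phonemes = ["p", "t", "b", "d"]
C = np.array([
    [50, 20,  8,  2],
    [18, 55,  3,  4],
    [ 7,  2, 48, 23],
    [ 3,  5, 21, 51],
], dtype=float)

# Convert counts to a symmetric dissimilarity matrix:
# normalize rows to confusion probabilities, symmetrize,
# and take 1 - similarity, with zeros on the diagonal.
P = C / C.sum(axis=1, keepdims=True)
S = 0.5 * (P + P.T)
D = 1.0 - S
np.fill_diagonal(D, 0.0)

# Embed the phonemes in two dimensions; phonemes that are
# frequently confused end up close together, so the axes can
# be inspected for features such as place or manner of articulation.
mds = MDS(n_components=2, dissimilarity="precomputed", random_state=0)
coords = mds.fit_transform(D)

for label, (x, y) in zip(phonemes, coords):
    print(f"{label}: ({x:+.3f}, {y:+.3f})")
```

In an analysis of this kind, the recovered dimensions would then be compared across the normal-hearing and impaired-hearing simulations to test whether the pattern of confusions, and not just the overall score, has changed.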