AUDITORY-VISUAL SPEECH RECOGNITION BY HEARING-IMPAIRED SUBJECTS - CONSONANT RECOGNITION, SENTENCE RECOGNITION, AND AUDITORY-VISUAL INTEGRATION

Citation
K.W. Grant et al., AUDITORY-VISUAL SPEECH RECOGNITION BY HEARING-IMPAIRED SUBJECTS - CONSONANT RECOGNITION, SENTENCE RECOGNITION, AND AUDITORY-VISUAL INTEGRATION, The Journal of the Acoustical Society of America, 103(5), 1998, pp. 2677-2690
Citations number
43
Subject Categories
Acoustics
Volume
103
Issue
5
Year of publication
1998
Part
1
Pages
2677 - 2690
Database
ISI
SICI code
Abstract
Factors leading to variability in auditory-visual (AV) speech recognition include the subject's ability to extract auditory (A) and visual (V) signal-related cues, the integration of A and V cues, and the use of phonological, syntactic, and semantic context. In this study, measures of A, V, and AV recognition of medial consonants in isolated nonsense syllables and of words in sentences were obtained in a group of 29 hearing-impaired subjects. The test materials were presented in a background of speech-shaped noise at 0-dB signal-to-noise ratio. Most subjects achieved substantial AV benefit for both sets of materials relative to A-alone recognition performance. However, there was considerable variability in AV speech recognition both in terms of the overall recognition score achieved and in the amount of audiovisual gain. To account for this variability, consonant confusions were analyzed in terms of phonetic features to determine the degree of redundancy between A and V sources of information. In addition, a measure of integration ability was derived for each subject using recently developed models of AV integration. The results indicated that (1) AV feature reception was determined primarily by visual place cues and auditory voicing+manner cues, (2) the ability to integrate A and V consonant cues varied significantly across subjects, with better integrators achieving more AV benefit, and (3) significant intra-modality correlations were found between consonant measures and sentence measures, with AV consonant scores accounting for approximately 54% of the variability observed for AV sentence recognition. Integration modeling results suggested that speechreading and AV integration training could be useful for some individuals, potentially providing as much as 26% improvement in AV consonant recognition.
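
Illustrative note: "accounting for approximately 54% of the variability" corresponds to a squared Pearson correlation (r^2 of about 0.54, i.e. r of about 0.73) between per-subject AV consonant and AV sentence scores. The short Python sketch below shows that computation; the score lists are hypothetical placeholders, not data from the study.

    # Sketch only: hypothetical per-subject percent-correct scores,
    # used to show how r^2 expresses "variance accounted for".
    from math import sqrt

    def pearson_r(x, y):
        """Pearson correlation between two equal-length score lists."""
        n = len(x)
        mx, my = sum(x) / n, sum(y) / n
        cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
        sx = sqrt(sum((a - mx) ** 2 for a in x))
        sy = sqrt(sum((b - my) ** 2 for b in y))
        return cov / (sx * sy)

    # Placeholder AV consonant and AV sentence scores for eight subjects.
    av_consonant = [62, 70, 55, 81, 74, 66, 90, 58]
    av_sentence  = [48, 65, 50, 78, 70, 60, 88, 52]

    r = pearson_r(av_consonant, av_sentence)
    print(f"r = {r:.2f}, variance accounted for (r^2) = {r ** 2:.2f}")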