Paper: | AE-P2.6 |
Session: | Hearing Aids, Auditory Models and Physical Models |
Time: | Wednesday, May 17, 16:30 - 18:30 |
Presentation: | Poster |
Topic: | Audio and Electroacoustics: Auditory Modeling and Hearing Aids |
Title: | SMOOTH GMM BASED MULTI-TALKER SPECTRAL CONVERSION FOR SPECTRALLY DEGRADED SPEECH |
Authors: | Chuping Liu, University of Southern California, United States; Qian-Jie Fu, House Ear Institute, United States; Shrikanth S. Narayanan, University of Southern California, United States |
Abstract: | Because of the limited spectro-temporal resolution of the implant device, cochlear implant (CI) patients are more susceptible to talker variability than normal-hearing (NH) listeners. In the present study, the effect of a smooth GMM-based spectral conversion algorithm on multi-talker sentence recognition was tested in CI patients. In a model of CI speech processing (4-16 channels of spectrally degraded speech), talker distortion was significantly reduced with relatively few (~64) GMM components. CI patients’ sentence recognition was measured for one male (M1) and one female (F1) talker, as well as for spectrally converted speech (from M1 to F1 and from F1 to M1). Overall, CI users were sensitive to talker differences; some subjects performed better with M1, others with F1. After converting the spectrum of the less-understood talker to that of the better-understood talker, recognition of the less-understood talker’s speech was significantly improved. The results suggest that smooth GMM-based spectral conversion may improve CI patients’ multi-talker speech recognition. |
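For context, the sketch below illustrates the general joint-density GMM regression on which spectral conversion of this kind is typically built, using roughly 64 mixture components as mentioned in the abstract. It is an illustrative approximation only, not the authors' smooth GMM algorithm: the spectral features, time alignment of source and target frames, and the omitted smoothness constraint are all assumptions here.

    # Illustrative sketch of joint-density GMM spectral conversion (Python/NumPy/scikit-learn).
    # NOT the paper's exact "smooth GMM" method: feature choice, 64 mixtures, and the plain
    # per-frame conditional-mean regression (no smoothing across frames) are assumptions.
    import numpy as np
    from sklearn.mixture import GaussianMixture
    from scipy.stats import multivariate_normal

    def train_gmm_mapping(X_src, Y_tgt, n_components=64, seed=0):
        """Fit a GMM on joint (source, target) spectral features, e.g. time-aligned MFCC frames."""
        Z = np.hstack([X_src, Y_tgt])                       # (n_frames, 2d) joint vectors
        return GaussianMixture(n_components=n_components,
                               covariance_type='full',
                               random_state=seed).fit(Z)

    def convert(gmm, X_src):
        """Map source-talker frames toward the target talker via the GMM conditional mean."""
        d = X_src.shape[1]
        w = gmm.weights_                                    # (M,)
        mu_x, mu_y = gmm.means_[:, :d], gmm.means_[:, d:]   # (M, d) each
        S_xx = gmm.covariances_[:, :d, :d]                  # source-source blocks, (M, d, d)
        S_yx = gmm.covariances_[:, d:, :d]                  # target-source blocks, (M, d, d)

        # Posterior responsibility of each mixture given only the source frame.
        like = np.stack([multivariate_normal.pdf(X_src, mu_x[m], S_xx[m])
                         for m in range(len(w))], axis=1)   # (n, M)
        post = w * like
        post /= post.sum(axis=1, keepdims=True)

        # Weighted sum of per-mixture conditional means E[y | x, m].
        Y_hat = np.zeros_like(X_src)
        for m in range(len(w)):
            A = S_yx[m] @ np.linalg.inv(S_xx[m])            # regression matrix for mixture m
            Y_hat += post[:, [m]] * (mu_y[m] + (X_src - mu_x[m]) @ A.T)
        return Y_hat

In the experiment described above, such a mapping would be trained between the two talkers (M1 and F1) and applied in the direction from the less-understood to the better-understood talker before the speech is spectrally degraded for CI simulation or presented through the implant processor.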