Paper: SLP-L10.6
Session: Speaker Adaptation
Time: Friday, May 19, 11:40 - 12:00
Presentation: Lecture
Topic: Speech and Spoken Language Processing: Speaker adaptation and normalization (e.g., VTLN)
Title: REGULARIZED ADAPTATION OF DISCRIMINATIVE CLASSIFIERS
Authors: Xiao Li, Jeff Bilmes, University of Washington, Seattle, United States
Abstract: We introduce a novel method for adapting discriminative classifiers (multi-layer perceptrons (MLPs) and support vector machines (SVMs)). Our method is based on the idea of regularization, whereby the optimization criterion to be minimized includes a penalty according to how “complex” the system is. Specifically, our regularization term penalizes the adapted system according to how different it is from the unadapted system, thus avoiding overtraining when only a small amount of adaptation data is available. We justify this approach with a max-margin argument. We apply the technique to MLPs and produce a working real-time system for rapid adaptation of vowel classifiers in the context of the Vocal Joystick project. Overall, we find that our method outperforms all other MLP-based adaptation methods we are aware of. Our technique, however, is quite general and can be used whenever rapid adaptation of MLP or SVM classifiers is needed (e.g., from a speaker-independent to a speaker-dependent classifier in a hybrid MLP/HMM or SVM/HMM speech-recognition system).
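As a rough illustration of the regularized-adaptation idea described in the abstract, the sketch below adapts a speaker-independent (SI) MLP on a small batch of adaptation data while adding an L2 penalty on how far the adapted weights move from the SI weights. The network shape, loss, optimizer, feature dimension, and regularization weight `lam` are all illustrative assumptions, not details taken from the paper.

```python
import copy
import torch
import torch.nn as nn

def adapt_mlp(si_model: nn.Module, x: torch.Tensor, y: torch.Tensor,
              lam: float = 0.5, epochs: int = 50, lr: float = 1e-2) -> nn.Module:
    """Adapt a copy of an SI MLP, penalizing deviation from the SI weights."""
    adapted = copy.deepcopy(si_model)
    # Frozen snapshot of the SI parameters, used as the regularization target.
    si_params = [p.detach().clone() for p in si_model.parameters()]

    opt = torch.optim.SGD(adapted.parameters(), lr=lr)
    ce = nn.CrossEntropyLoss()

    for _ in range(epochs):
        opt.zero_grad()
        loss = ce(adapted(x), y)
        # Regularizer: squared distance between adapted and unadapted weights,
        # discouraging overtraining on the small adaptation set.
        reg = sum(((p - p0) ** 2).sum()
                  for p, p0 in zip(adapted.parameters(), si_params))
        (loss + lam * reg).backward()
        opt.step()
    return adapted

if __name__ == "__main__":
    # Toy usage with made-up dimensions: a 2-layer MLP vowel classifier,
    # adapted on 20 labeled frames of 39-dimensional features.
    si = nn.Sequential(nn.Linear(39, 64), nn.Sigmoid(), nn.Linear(64, 4))
    x_adapt = torch.randn(20, 39)
    y_adapt = torch.randint(0, 4, (20,))
    sd_model = adapt_mlp(si, x_adapt, y_adapt, lam=0.5)
```

Setting `lam` large keeps the adapted classifier close to the speaker-independent one; setting it to zero reduces the sketch to ordinary retraining on the adaptation data.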