Paper: SLP-P6.3
Session: Speech Understanding, Translation, Applications and Systems
Time: Tuesday, May 16, 16:30 - 18:30
Presentation: Poster
Topic: Speech and Spoken Language Processing: Speech Understanding
Title: SPEECH UTTERANCE CLASSIFICATION MODEL TRAINING WITHOUT MANUAL TRANSCRIPTIONS
Authors: Ye-Yi Wang, Microsoft Research, United States; John Lee, Massachusetts Institute of Technology, United States; Alex Acero, Microsoft Research, United States
Abstract: Speech utterance classification has been widely applied to a variety of spoken language understanding tasks, including call routing, dialog systems, and command and control. Most speech utterance classification systems adopt a data-driven statistical learning approach, which requires manually transcribed and annotated training data. In this paper we introduce a novel classification model training approach based on unsupervised language model adaptation. It requires only the training utterances as wave files, together with the corresponding classification destinations of those utterances, for model training; no manual transcription of the speech utterances is necessary. Experimental results show that the approach achieves classification accuracy at the same level as a model trained with manual transcriptions, while being much cheaper to implement.
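The abstract does not spell out the training procedure, so the sketch below is only a rough illustration of the general idea of training an utterance classifier from wave files and destination labels without manual transcripts: a generic recognizer (represented here by a caller-supplied `recognize` callable, an assumption) produces automatic transcripts, one unigram language model per destination is estimated from those transcripts, and a new utterance is routed to the destination whose model best scores its decoded hypothesis. The function names, the unigram modeling, and the smoothing scheme are illustrative choices, not the authors' method.

```python
from collections import Counter, defaultdict
import math


def train_destination_lms(training_pairs, recognize, smoothing=1.0):
    """Build one smoothed unigram LM per destination from automatic transcripts.

    training_pairs: iterable of (wave_path, destination) pairs.
    recognize:      callable mapping a wave file to a word hypothesis string
                    (a stand-in for a generic speech recognizer).
    """
    counts = defaultdict(Counter)              # destination -> word counts
    vocab = set()
    for wave_path, destination in training_pairs:
        hypothesis = recognize(wave_path)      # no manual transcription used
        words = hypothesis.lower().split()
        counts[destination].update(words)
        vocab.update(words)

    lms = {}
    for destination, word_counts in counts.items():
        total = sum(word_counts.values()) + smoothing * (len(vocab) + 1)
        lm = {w: math.log((word_counts[w] + smoothing) / total) for w in vocab}
        lm["<unk>"] = math.log(smoothing / total)   # unseen-word fallback
        lms[destination] = lm
    return lms


def classify(wave_path, lms, recognize, priors=None):
    """Decode the utterance, then pick the destination whose LM scores it best."""
    words = recognize(wave_path).lower().split()
    best, best_score = None, float("-inf")
    for destination, lm in lms.items():
        score = math.log(priors[destination]) if priors else 0.0
        score += sum(lm.get(w, lm["<unk>"]) for w in words)
        if score > best_score:
            best, best_score = destination, score
    return best
```

In a fuller system the decoding step is also where the abstract's unsupervised language model adaptation would presumably enter: the automatic transcripts grouped by destination can be fed back to adapt the recognizer's language model and re-decode the training waves, refining both the transcripts and the per-destination models without any manual labeling of the audio.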