ICASSP 2006 - May 15-19, 2006 - Toulouse, France

Technical Program

Paper Detail

Paper: SLP-P7.4
Session: Audio-visual and Multimodal Processing
Time: Wednesday, May 17, 10:00 - 12:00
Presentation: Poster
Topic: Speech and Spoken Language Processing: Speech/voice-based human-computer interfaces (HCI)
Title: Articulatory Feature Classification using Surface Electromyography
Authors: Szu-Chen Jou, Carnegie Mellon University, United States; Lena Maier-Hein, Universitaet Karlsruhe, Germany; Tanja Schultz, Alex Waibel, Carnegie Mellon University, United States
Abstract: In this paper, we present an approach for articulatory feature classification based on surface electromyographic signals generated by the facial muscles. With parallel recordings of audible speech and electromyographic signals, experiments are conducted to show the anticipatory behavior of electromyographic signals with respect to speech signals. On average, we found the electromyographic signals to precede the speech signals by 0.02 to 0.12 seconds. Furthermore, it is shown that different articulators exhibit different anticipatory behavior. With offset-aligned signals, we improved the average F-score of the articulatory feature classifiers in our baseline system from 0.467 to 0.502.



IEEE Signal Processing Society
