Paper: MMSP-P3.4
Session: Multimedia Database, Content Retrieval, Joint Processing and Standards
Time: Wednesday, May 17, 16:30 - 18:30
Presentation: Poster
Topic: Multimedia Signal Processing: Joint audio, image, video, graphic signal processing
Title: FEATURE EXTRACTION FROM TALKING MOUTHS FOR VIDEO-BASED BI-MODAL SPEAKER VERIFICATION
Authors: Hua Ouyang, Tan Lee, W. N. Chan, Chinese University of Hong Kong, Hong Kong SAR of China
Abstract: As low-cost video transmission becomes popular, video-based bi-modal (audio and visual) authentication has great potential in applications that require access control on handheld terminals. In this paper, we propose to use the averaged mouth image (AMI) for speaker verification. The AMI is computed by averaging properly aligned mouth images over the whole video sequence. Despite its simplicity, the AMI not only contains appearance information but also describes the stylistic articulation gestures of individual speakers. The AMI is found to be fairly invariant to the spoken content. Experimental results show that AMI-based features are very effective in discriminating between speakers. Explicit and precise extraction of lip contours or other feature points is not required. For bi-modal verification, the proposed video features are found to be highly complementary to the audio features.
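As a rough illustration of the AMI idea described in the abstract, a minimal sketch follows; the function name and array shapes are assumptions for illustration, not the authors' implementation, and the alignment step the paper requires is taken as given rather than shown:

```python
import numpy as np

def averaged_mouth_image(mouth_frames):
    """Compute an averaged mouth image (AMI) from a sequence of
    mouth-region frames, assumed already aligned and cropped to a
    common size (the alignment procedure itself is not shown here)."""
    frames = np.asarray(mouth_frames, dtype=np.float64)  # shape: (T, H, W)
    return frames.mean(axis=0)  # pixel-wise average over the whole sequence

# Toy usage: three 4x4 "frames" standing in for an aligned mouth sequence.
frames = [np.full((4, 4), v) for v in (0.0, 0.5, 1.0)]
ami = averaged_mouth_image(frames)
```

Because the average is taken over the entire utterance, frame-to-frame lip motion is folded into one image, which is consistent with the abstract's claim that the AMI captures both appearance and articulation style without explicit lip-contour extraction.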