ICASSP 2006 - May 15-19, 2006 - Toulouse, France

Paper Detail

Paper: SS-13.2
Session: Speech Translation for Cross-Lingual Communication
Time: Friday, May 19, 16:50 - 17:10
Presentation: Special Session Lecture
Topic: Special Sessions: Speech translation for cross-lingual communication
Title: Speech Recognition Engineering Issues in Speech to Speech Translation System Design for Low Resource Languages and Domains
Authors: Shrikanth S. Narayanan, Panayiotis G. Georgiou, Abhinav Sethy, Dagen Wang, Murtaza Bulut, Shiva Sundaram, Emil Ettelaie, Sankaranarayanan Ananthakrishnan, University of Southern California, United States; Horacio Franco, Kristin Precoda, Dimitra Vergyri, Jing Zheng, Wen Wang, Ramana Rao Gadde, Martin Graciarena, Victor Abrash, Michael Frandsen, Colleen Richey, SRI International, United States
Abstract: Engineering automatic speech recognition (ASR) for speech-to-speech (S2S) translation systems, especially targeting languages and domains that do not have readily available spoken language resources, is immensely challenging for a number of reasons. In addition to contending with the conventional data-hungry needs of acoustic and language modeling, these designs have to accommodate varying requirements imposed by domain needs and characteristics, the target device and usage modality (such as phrase-based or spontaneous free-form interactions, with or without visual feedback), and the huge spoken language variability arising from socio-linguistic and cultural differences among users. This paper, using case studies of creating speech translation systems between English and languages such as Pashto and Farsi, describes some of the practical issues and the solutions that were developed for multilingual ASR development. These include novel acoustic and language modeling strategies such as language-adaptive recognition, active-learning-based language modeling, and class-based language models that can better exploit resource-poor language data; efficient search strategies, including N-best and confidence generation to aid multiple-hypotheses translation; the use of dialog information and clever interface choices to facilitate ASR; and audio interface design for meeting both usability and robustness requirements.
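
To illustrate one of the techniques named in the abstract, the sketch below shows the general idea behind a class-based bigram language model: word bigram probabilities are factored through word classes so that sparse counts from a resource-poor language are pooled at the class level. This is a minimal, generic Python sketch, not the authors' implementation; the ClassBigramLM name, the word-to-class map, the "OTHER" fallback class, and the add-alpha smoothing are illustrative assumptions.

from collections import defaultdict

class ClassBigramLM:
    """Toy class-based bigram LM: P(w | w_prev) ~= P(c | c_prev) * P(w | c)."""

    def __init__(self, word_to_class):
        self.word_to_class = word_to_class        # e.g. {"monday": "DAY"}; unknown words fall into "OTHER"
        self.class_bigrams = defaultdict(int)     # counts of (c_prev, c_cur)
        self.class_unigrams = defaultdict(int)    # counts of c_prev as a history
        self.word_given_class = defaultdict(int)  # counts of (c, w)
        self.class_totals = defaultdict(int)      # counts of c as an emitting class

    def _cls(self, w):
        return self.word_to_class.get(w, "OTHER")

    def train(self, sentences):
        for words in sentences:
            classes = [self._cls(w) for w in words]
            for i in range(1, len(words)):
                c_prev, c_cur, w = classes[i - 1], classes[i], words[i]
                self.class_bigrams[(c_prev, c_cur)] += 1
                self.class_unigrams[c_prev] += 1
                self.word_given_class[(c_cur, w)] += 1
                self.class_totals[c_cur] += 1

    def prob(self, w_prev, w_cur, alpha=0.1):
        # Add-alpha smoothing keeps unseen events from zeroing out a hypothesis.
        c_prev, c_cur = self._cls(w_prev), self._cls(w_cur)
        num_classes = max(len(self.class_totals), 1)
        vocab_size = len(self.word_to_class) + 1  # +1 for the OTHER bucket
        p_class = (self.class_bigrams[(c_prev, c_cur)] + alpha) / (
            self.class_unigrams[c_prev] + alpha * num_classes)
        p_word = (self.word_given_class[(c_cur, w_cur)] + alpha) / (
            self.class_totals[c_cur] + alpha * vocab_size)
        return p_class * p_word

A toy usage, with hypothetical words and classes: training on the single sentence ["fly", "monday"] and then scoring prob("fly", "tuesday") still yields class-level probability mass for the unseen word bigram, because "tuesday" shares the DAY class with "monday":

lm = ClassBigramLM({"fly": "VERB", "monday": "DAY", "tuesday": "DAY"})
lm.train([["fly", "monday"]])
print(lm.prob("fly", "tuesday"))  # benefits from the observed VERB -> DAY class bigram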



IEEE Signal Processing Society
