Paper: SS-13.4
Session: Speech Translation for Cross-Lingual Communication
Time: Friday, May 19, 17:30 - 17:50
Presentation: Special Session Lecture
Topic: Special Sessions: Speech translation for cross-lingual communication
Title: Integrating Speech Recognition and Machine Translation: Where Do We Stand?
Authors: Evgeny Matusov, Stephan Kanthak, Hermann Ney, University of Technology Aachen (RWTH), Germany
Abstract: This paper describes state-of-the-art interfaces between speech recognition and machine translation. We modify two different machine translation systems to effectively process dense speech recognition lattices. In addition, we describe how to fully integrate speech recognition and machine translation using weighted finite-state transducers. With a thorough set of experiments, we show that both the acoustic model scores and the source language model positively and significantly affect the translation quality. We have found consistent improvements on three different corpora compared with translating single-best recognition results.
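Purely as an illustration of the lattice-translation idea summarized in the abstract, the following Python sketch scores every path through a small word lattice by a weighted combination of acoustic, source language model, and translation costs, and returns the cheapest full path as the output translation. It is not the authors' system: the lattice, the word-to-word translation table, the cost values, and the scaling factors are all hypothetical, and monotone word-by-word translation stands in for the phrase-based and WFST-based systems used in the paper.

import math
from collections import defaultdict

# Hypothetical word lattice: edges (from_state, to_state, source_word,
# acoustic_cost, source_lm_cost). Costs are negative log-probabilities
# and purely illustrative.
LATTICE_EDGES = [
    (0, 1, "wir",    2.1, 1.0),
    (0, 1, "wer",    2.6, 2.3),
    (1, 2, "gehen",  1.8, 0.9),
    (1, 2, "sehen",  2.0, 1.7),
    (2, 3, "morgen", 1.5, 0.8),
]
FINAL_STATE = 3

# Toy word-to-word translation table with translation costs
# (also negative log-probabilities); again purely illustrative.
TRANSLATION_TABLE = {
    "wir":    [("we", 0.2)],
    "wer":    [("who", 0.3)],
    "gehen":  [("go", 0.4), ("walk", 1.1)],
    "sehen":  [("see", 0.5)],
    "morgen": [("tomorrow", 0.3)],
}

# Assumed scaling factors for the individual models.
ACOUSTIC_SCALE = 1.0
SOURCE_LM_SCALE = 0.7
TRANSLATION_SCALE = 1.0


def translate_lattice(edges, final_state):
    """Monotone dynamic programming over the lattice: each path is scored
    by a weighted sum of acoustic, source LM, and translation costs, and
    the cheapest complete path determines the output translation."""
    # best[state] = (accumulated cost, translated words so far)
    best = defaultdict(lambda: (math.inf, []))
    best[0] = (0.0, [])

    # Edges are assumed to be sorted in topological order of source state.
    for src, dst, word, ac_cost, lm_cost in edges:
        base_cost, base_hyp = best[src]
        if math.isinf(base_cost):
            continue  # state not reachable
        for target_word, tr_cost in TRANSLATION_TABLE.get(word, []):
            cost = (base_cost
                    + ACOUSTIC_SCALE * ac_cost
                    + SOURCE_LM_SCALE * lm_cost
                    + TRANSLATION_SCALE * tr_cost)
            if cost < best[dst][0]:
                best[dst] = (cost, base_hyp + [target_word])

    return best[final_state]


if __name__ == "__main__":
    cost, translation = translate_lattice(LATTICE_EDGES, FINAL_STATE)
    print(f"best combined cost: {cost:.2f}")
    print("translation:", " ".join(translation))

In this toy setting the lower source LM scale lets the acoustically and linguistically cheaper path "wir gehen morgen" win, yielding "we go tomorrow"; the same trade-off between recognition and translation scores is what the paper studies at lattice scale.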