Paper: | MLSP-P4.11 |
Session: | Audio and Communication Applications |
Time: | Thursday, May 18, 14:00 - 16:00 |
Presentation: | Poster |
Topic: | Machine Learning for Signal Processing: Speech and Audio Processing Applications |
Title: | Latent Dirichlet Decomposition for Single Channel Speaker Separation |
Authors: | Bhiksha Raj, Mitsubishi Electric Research Laboratories, United States; Madhusudana Shashanka, Boston University Hearing Research Center, United States; Paris Smaragdis, Mitsubishi Electric Research Laboratories, United States |
Abstract: | We present an algorithm for the separation of multiple speakers from mixed single-channel recordings by latent variable decomposition of the speech spectrogram. We model each magnitude spectral vector in the short-time Fourier transform of a speech signal as the outcome of a discrete random process that generates frequency bin indices. The distribution of the process is modeled as a mixture of multinomial distributions, such that the mixture weights of the component multinomials vary from analysis window to analysis window. The component multinomials are assumed to be speaker specific and are learned from training signals for each speaker. We model the prior distribution of the mixture weights for each speaker as a Dirichlet distribution. The distributions representing magnitude spectral vectors for the mixed signal are decomposed into mixtures of the multinomials for all component speakers. The frequency distribution, i.e. the spectrum, for each speaker is reconstructed from this decomposition. |
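The following NumPy sketch is a rough illustration of the decomposition described in the abstract, not the authors' implementation. It assumes the speaker-specific multinomial components P(f|z) have already been learned offline and are stacked as columns of per-speaker basis matrices; a symmetric Dirichlet concentration `alpha` stands in for the paper's learned per-speaker priors, and names such as `decompose_frame` and `separate` are illustrative only.

import numpy as np

def decompose_frame(v, bases, alpha=1.0, n_iter=100, eps=1e-12):
    """EM / MAP estimate of the per-frame mixture weights P_t(z) for one magnitude
    spectral vector `v`, given fixed multinomial components `bases` = P(f|z)."""
    n_freq, n_comp = bases.shape
    weights = np.full(n_comp, 1.0 / n_comp)          # P_t(z), initialised uniform
    for _ in range(n_iter):
        # E-step: posterior P_t(z|f) is proportional to P_t(z) P(f|z)
        joint = bases * weights                       # n_freq x n_comp
        post = joint / (joint.sum(axis=1, keepdims=True) + eps)
        # M-step: expected counts per component, plus Dirichlet pseudo-counts
        counts = post.T @ v + (alpha - 1.0)
        counts = np.maximum(counts, 0.0)              # guard against negatives when alpha < 1
        weights = counts / (counts.sum() + eps)
    return weights

def separate(mix_mag, speaker_bases, alpha=1.0):
    """Reconstruct per-speaker magnitude spectrograms from a mixture.
    `mix_mag`: n_freq x n_frames magnitude spectrogram of the mixed signal.
    `speaker_bases`: list of per-speaker basis matrices (n_freq x n_z each)."""
    bases = np.hstack(speaker_bases)                  # all speakers' multinomials
    splits = np.cumsum([b.shape[1] for b in speaker_bases])[:-1]
    outputs = [np.zeros_like(mix_mag) for _ in speaker_bases]
    for t in range(mix_mag.shape[1]):
        v = mix_mag[:, t]
        w = decompose_frame(v, bases, alpha)
        per_comp = bases * w                          # contribution of each component
        total = per_comp.sum(axis=1) + 1e-12
        for s, w_s in enumerate(np.split(per_comp, splits, axis=1)):
            # Wiener-style reweighting: scale the mixture by each speaker's share
            outputs[s][:, t] = v * w_s.sum(axis=1) / total
    return outputs

With alpha = 1 the Dirichlet prior is flat and the weight updates reduce to plain maximum-likelihood (PLCA-style) estimation; the speaker-specific Dirichlet priors described in the abstract would replace the single symmetric alpha used in this sketch.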