Audio-Visual Multi-Channel Integration and Recognition of Overlapped Speech


By: Jianwei Yu; Shi-Xiong Zhang; Bo Wu; Shansong Liu; Shoukang Hu; Mengzhe Geng; Xunying Liu; Helen Meng; Dong Yu

Automatic speech recognition (ASR) technologies have advanced significantly in the past few decades. However, recognition of overlapped speech remains highly challenging. To this end, multi-channel microphone array data are widely used in current ASR systems. Motivated by the invariance of the visual modality to acoustic signal corruption and the additional cues it provides for separating the target speaker from interfering sound sources, this paper presents an audio-visual multi-channel recognition system for overlapped speech. It benefits from a tight integration between a speech separation front-end and a recognition back-end, both of which incorporate additional video input. A series of audio-visual multi-channel speech separation front-end components based on time-frequency (TF) masking, filter-and-sum, and mask-based minimum variance distortionless response (MVDR) neural channel integration approaches are developed. To reduce the error cost mismatch between the separation and recognition components, the entire system is jointly fine-tuned using a multi-task criterion that interpolates the scale-invariant signal-to-noise ratio (Si-SNR) loss with either the connectionist temporal classification (CTC) or lattice-free maximum mutual information (LF-MMI) loss function. Experiments suggest that the proposed audio-visual multi-channel recognition system outperforms the baseline audio-only multi-channel ASR system by up to 8.04% (31.68% relative) and 22.86% (58.51% relative) absolute word error rate (WER) reduction on overlapped speech constructed using either simulation or replaying of the LRS2 dataset, respectively. Consistent performance improvements are also obtained with the proposed audio-visual multi-channel recognition system when using occluded video input with the lip region randomly covered by up to 60%.
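The Si-SNR term of the multi-task criterion has a standard closed form: the estimated waveform is projected onto the reference to obtain a scaled target component, and the ratio of target to residual energy is expressed in dB. The sketch below illustrates that computation and the interpolated objective described in the abstract; it is a minimal NumPy illustration, not the authors' implementation, and the function names, the `alpha` interpolation weight, and the scalar `asr_loss` stand-in for the CTC/LF-MMI term are all assumptions for illustration.

```python
import numpy as np

def si_snr(estimate, target, eps=1e-8):
    """Scale-invariant signal-to-noise ratio (Si-SNR) in dB.

    Both arguments are 1-D waveforms; means are removed so the
    measure is invariant to DC offset as well as to scaling.
    """
    estimate = estimate - estimate.mean()
    target = target - target.mean()
    # Project the estimate onto the target: the scaled target component.
    s_target = (np.dot(estimate, target) / (np.dot(target, target) + eps)) * target
    # Everything not explained by the target counts as error.
    e_noise = estimate - s_target
    return 10.0 * np.log10(np.dot(s_target, s_target) / (np.dot(e_noise, e_noise) + eps))

def joint_loss(est_wav, ref_wav, asr_loss, alpha=0.5):
    """Interpolated multi-task objective (hypothetical weighting scheme):
    negative Si-SNR (so that minimizing improves separation) combined
    with a recognition loss such as CTC or LF-MMI, here a plain scalar."""
    return alpha * (-si_snr(est_wav, ref_wav)) + (1.0 - alpha) * asr_loss
```

Because the estimate is projected onto the reference before the energy ratio is taken, rescaling the estimate leaves the score unchanged, which is what makes the criterion usable on separation outputs whose gain is arbitrary.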
