Summary of the REVERB challenge


  1. http://reverb2014.dereverberation.com/ Summary of the REVERB challenge. Keisuke Kinoshita, Marc Delcroix, Takuya Yoshioka, Tomohiro Nakatani (NTT Corporation); Reinhold Haeb-Umbach, Volker Leutnant (Paderborn Univ.); Emanuel Habets (International Audio Labs Erlangen); Walter Kellermann, Roland Maas (Univ. of Erlangen-Nuremberg); Sharon Gannot (Bar-Ilan Univ.); Bhiksha Raj (Carnegie Mellon Univ.); Armin Sehr (Beuth Univ. of Applied Sciences Berlin)

  2. Outline
     - Motivation and design of the REVERB challenge
     - Summary of the participants' systems
     - Result summary: the ASR results and the SE (speech enhancement) results
     - Concluding remarks

  3. Motivation
     - Recently, substantial progress has been made in the field of reverberant speech signal processing, including:
       - single- and multi-channel dereverberation techniques
       - ASR techniques robust to reverberation
     - However, a common evaluation framework has been lacking.
     => The REVERB challenge provides a common evaluation framework for both ASR and SE studies.

  4. Target acoustic scenarios
     - Reverberant speech
     - Moderate stationary noise (SNR* of about 20 dB)
     - 1ch, 2ch and 8ch scenarios
     [Figure: one of the microphone arrays used]
     * In the SNR, "S" includes the direct signal and early reflections up to 50 ms (a small sketch of this definition follows this slide).
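The footnote's SNR definition can be made concrete with a short sketch. This is an illustrative reading of the definition, not challenge tooling: the "signal" is the clean source convolved with the first 50 ms of the room impulse response (direct path plus early reflections), while late reverberation and additive noise both count as "noise". The function name and arguments are assumptions.

```python
import numpy as np

def early_late_snr(clean, rir, noise, fs, early_ms=50.0):
    """SNR in dB, counting the direct path + early reflections (<= 50 ms) as signal.

    Illustrative sketch of the slide's footnote; not the official scoring code.
    """
    split = int(round(early_ms * 1e-3 * fs))   # 50 ms boundary in samples
    signal = np.convolve(clean, rir[:split])   # direct sound + early reflections
    late = np.convolve(clean, rir[split:])     # late reverberation, counted as noise
    noise_energy = np.sum(late ** 2) + np.sum(noise ** 2)
    return 10.0 * np.log10(np.sum(signal ** 2) / noise_energy)
```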

  5. The challenge data (1/2)
     - Based on the Wall Street Journal Cambridge (WSJCAM0) 5k task
     - Real recordings (RealData)*1 and simulated data (SimData)*2; development and evaluation sets provided
       - RealData for validity assessment under real reverberant conditions
       - SimData for experiments under various reverberant conditions (a part of SimData simulates RealData in terms of reverberation time)
     - The text prompts were the same for both data sets.
     - Clean and (simulated) multi-condition training data provided (a minimal simulation sketch follows this slide)
     *1 RealData is available from the LDC catalog as part of the MC-WSJ-AV corpus (since April 2014): http://catalog.ldc.upenn.edu/LDC2014S03
     *2 The materials required to generate SimData are available on our webpage; the data will soon be available through the LDC catalog.
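As a rough illustration of how SimData-style material can be produced (the official generation scripts are on the challenge webpage; this is only a minimal sketch with assumed inputs): convolve a clean WSJCAM0 utterance with a measured RIR and add recorded noise scaled to about 20 dB SNR.

```python
import numpy as np

def simulate_observation(clean, rir, noise, target_snr_db=20.0):
    """Minimal SimData-style simulation: reverberate, then add noise at a target SNR.

    Note: for simplicity the SNR here uses the full reverberant speech power,
    whereas the challenge defines "S" as direct sound + early reflections only.
    """
    reverberant = np.convolve(clean, rir)[:len(clean)]
    noise = noise[:len(reverberant)]
    speech_power = np.mean(reverberant ** 2)
    noise_power = np.mean(noise ** 2)
    # choose gain so that 10*log10(speech_power / (gain^2 * noise_power)) = target
    gain = np.sqrt(speech_power / (noise_power * 10.0 ** (target_snr_db / 10.0)))
    return reverberant + gain * noise
```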

  6. The challenge data (2/2)
     - Acoustic conditions for SimData and RealData:

                                    Reverb time (T60)        Speaker-to-mic distance
       SimData (Rooms 1, 2, 3)      0.25 s, 0.5 s, 0.7 s*    near: 0.5 m / far: 2.0 m
       RealData                     0.7 s*                   near: ~1.0 m / far: > 2.5 m

       * SimData Room 3 simulates RealData (a T60-estimation sketch follows this slide).
     - Sound examples (clean/headset vs. observed; male and female speakers):
       RealData (far) and SimData (Room 2, far)
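T60, the time for the sound energy to decay by 60 dB, is commonly estimated from a measured RIR via Schroeder backward integration. A minimal sketch, not part of the challenge tooling; the -5 dB to -25 dB fitting range is one common convention.

```python
import numpy as np

def estimate_t60(rir, fs):
    """Estimate T60 from an RIR via Schroeder backward integration.

    Fits the -5 dB to -25 dB portion of the energy decay curve and
    extrapolates the fitted slope to -60 dB.
    """
    edc = np.cumsum(rir[::-1] ** 2)[::-1]          # Schroeder energy decay curve
    edc_db = 10.0 * np.log10(edc / edc[0])
    t = np.arange(len(rir)) / fs
    fit_region = (edc_db <= -5.0) & (edc_db >= -25.0)
    slope, _ = np.polyfit(t[fit_region], edc_db[fit_region], 1)  # dB per second
    return -60.0 / slope
```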

  7. The challenge tasks: ASR and SE
     - ASR task
       - Evaluation criterion: word error rate (WER; a scoring sketch follows this slide)
     - SE task
       - Objective evaluation criteria
         - Intrusive measures (requiring reference clean speech):
           - Cepstral distance (CD)
           - Frequency-weighted segmental SNR (FWsegSNR)
           - Log-likelihood ratio (LLR)
           - PESQ (optional)
         - Non-intrusive measure:
           - Speech-to-reverberation modulation energy ratio (SRMR)
       - Subjective evaluation criteria (web-based MUSHRA test):
         - Perceived amount of reverberation
         - Overall quality (artifacts, distortions, remaining reverberation, etc.)
     - The same test and training data are provided for both tasks.
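For reference, WER is the standard Levenshtein-alignment error rate: (substitutions + deletions + insertions) divided by the number of reference words. A minimal self-contained sketch (the challenge baseline scoring itself is HTK-based):

```python
def wer(ref_words, hyp_words):
    """Word error rate via Levenshtein distance over word sequences."""
    n, m = len(ref_words), len(hyp_words)
    d = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(n + 1):
        d[i][0] = i                      # i deletions
    for j in range(m + 1):
        d[0][j] = j                      # j insertions
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            sub = d[i - 1][j - 1] + (ref_words[i - 1] != hyp_words[j - 1])
            d[i][j] = min(sub, d[i - 1][j] + 1, d[i][j - 1] + 1)
    return d[n][m] / max(n, 1)

# wer("the cat sat".split(), "the mat sat down".split()) -> 2/3 (1 sub + 1 ins)
```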

  8. Number of submissions
     - 27 participants (i.e., number of papers)
     - 18 submissions (comprising 49 systems) to the ASR task
     - 14 submissions (comprising 25 systems) to the SE task
     [Figure: percentage of 1ch, 2ch and 8ch systems in each task]

  9. Quick introduction to the participants' submitted systems

  10. A wide variety of approaches submitted
      [Diagram: processing pipeline; "spatial filtering" and "1ch SE/FE" highlighted as the main focus of SE participants]

  11. A wide variety of approaches submitted
      [Diagram: pipeline extended with robust feature extraction/normalization, acoustic model (AM), language model (LM) and decoding, the main focus of ASR participants]

  12. A wide variety of approaches submitted
      [Diagram: pipeline further extended with system combination, also a focus of ASR participants]

  13. A wide variety of approaches submitted
      [Diagram: full pipeline, now including model adaptation]
      Submissions range from 1ch/multi-channel SE algorithms to ASR back-end algorithms.

  14. Various approaches (1/4): spatial filtering
      - Beamforming, DOA-detection based (e.g., MVDR, delay-and-sum, GSC, multi-channel Wiener filter; a delay-and-sum sketch follows this slide)
      - Mask-based approaches
      - Phase-error filters
      - Dereverberation
        - STFT domain: inverse filtering, linear prediction, correlation shaping
        - Magnitude spectrum domain: estimation of non-negative RIRs
      - De-noising (STFT and auditory-feature domains)
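As a concrete instance of the simplest spatial filter named above, here is a minimal STFT-domain delay-and-sum beamformer. It assumes per-channel time delays toward the speaker are already known (e.g., from DOA estimation); the shapes and names are illustrative, not taken from any participant's system.

```python
import numpy as np

def delay_and_sum(stfts, delays, fs, n_fft):
    """Delay-and-sum beamforming in the STFT domain.

    stfts:  complex array of shape (channels, n_fft//2 + 1, frames)
    delays: per-channel arrival delays in seconds toward the target speaker
    """
    freqs = np.fft.rfftfreq(n_fft, d=1.0 / fs)           # bin center frequencies
    # steering vector: advance each channel by its delay so the target aligns in phase
    steer = np.exp(2j * np.pi * freqs[None, :] * np.asarray(delays)[:, None])
    return np.mean(steer[:, :, None] * stfts, axis=0)    # average over channels
```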

  15. Various approaches (2/4): single-channel SE / front-end
      - Dereverberation
        - Power/magnitude/auditory spectrum domain: e.g., exponential RIR model, linear prediction, non-negative matrix factorization/deconvolution, DNN/DRNN/DAE-based dereverberation
        - Cepstral domain: e.g., cepstral smoothing, ML-based inverse filter estimation
      - De-noising: e.g., spectral subtraction (SS), MMSE-STSA (an SS sketch follows this slide)
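A sketch of the "SS" (spectral subtraction) family listed above: estimate the noise magnitude from the first few frames (assumed speech-free), subtract it from every frame, and floor the result. Illustrative only, not any participant's exact system; the parameter values are assumptions.

```python
import numpy as np

def spectral_subtraction(stft, n_noise_frames=10, alpha=1.0, floor=0.01):
    """Magnitude spectral subtraction on an STFT of shape (freq_bins, frames)."""
    mag, phase = np.abs(stft), np.angle(stft)
    noise_mag = mag[:, :n_noise_frames].mean(axis=1, keepdims=True)  # noise estimate
    clean_mag = np.maximum(mag - alpha * noise_mag, floor * mag)     # subtract + floor
    return clean_mag * np.exp(1j * phase)                            # keep noisy phase
```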

  16. Various approaches (3/4): robust feature extraction / normalization
      - Robust features: e.g., PLP, auditory/articulatory-based features, modified cepstral features, i-vectors, warped MVDR, etc.
      - Normalization: e.g., CMS, VTLN, CMLLR, (H)LDA (a CMS/CMVN sketch follows this slide)
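The simplest normalization on this slide, cepstral mean subtraction (CMS), removes a per-utterance feature mean, which cancels stationary convolutive effects (channel and, partly, early reverberation); adding variance normalization gives CMVN. A minimal sketch assuming features of shape (frames, dims):

```python
import numpy as np

def cmvn(feats, norm_var=True):
    """Per-utterance cepstral mean (and optionally variance) normalization."""
    out = feats - feats.mean(axis=0, keepdims=True)            # CMS
    if norm_var:
        out = out / (feats.std(axis=0, keepdims=True) + 1e-10)  # variance norm
    return out
```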

  17. Various approaches (4/4): back-end
      - Acoustic model: GMM, SGMM, DNN, LSTM
      - Adaptation: MLLR, DNN adaptation
      - Training: clean/multi-condition training, SAT, ML/MMI/bMMI
      - Decoding: minimum Bayes risk decoding
      - System combination: ROVER, multi-stream HMM

  18. Various approaches (recap)
      [Diagram: the full pipeline with all components covered above: spatial filtering, 1ch SE/FE, robust feature extraction/normalization, AM/LM, adaptation, decoding and system combination]

  19. Now, the results...

  20. Results already publicly available
      - Results for the ASR task: http://reverb2014.dereverberation.com/result_asr.html
      - Results for the SE task: http://reverb2014.dereverberation.com/result_se.html
      Note: more results (detailed, new and updated) are available in the participants' papers.

  21. Let's start with the ASR results...

  22. ASR results: baselines
      - Recognition of the unprocessed 1ch observation
      [Figure: WER (%) of the four baselines (HTK baseline and HTK baseline + CMLLR, each with clean and multi-condition training) per condition: near/far in the small, mid-size and large SimData rooms, and near/far in RealData]

  23. ASR results: at a glance
      - All submitted WERs (everything mixed, so not a fair comparison)
      [Figure: all submitted WERs overlaid on the baseline chart, together with the clean/headset WERs, per condition]

  24. ASR results: analysis with bubble charts
      - Relationship between (averaged) WER and the number of microphones, the training data and the acoustic model
      - The size of a circle indicates the number of systems in the corresponding category.

  25. ASR results: analysis with bubble charts
      - Results per 1ch, 2ch and 8ch systems
      => More microphones lead to better performance.

  26. ASR results: analysis with bubble charts
      - Training data: "clean" vs. "multi-condition" vs. "own data" (e.g., American WSJ, data with different SNRs)
      => More training data (greater acoustic variety) leads to better performance.

  27. ASR results: analysis with bubble charts
      - GMM-HMM recognizers vs. DNN-HMM recognizers
      - The top-performing systems often employ DNN-HMMs.
      - The resulting performance may still differ due to the front-end processing, the DNN configuration, etc.

  28. ASR results analysis: SimData vs. RealData
      - Relationship between SimData scores and RealData scores
      [Figure: scatter plots of SimData vs. RealData scores, and of SimData Room 3 (far) vs. RealData scores]
      => Very strong correlation between SimData and RealData scores (even stronger between SimData Room 3 far and RealData; a minimal correlation sketch follows this slide).
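A minimal sketch of the kind of analysis behind this slide, assuming one averaged WER per submitted system on each data set (the inputs are placeholders, not the actual challenge numbers):

```python
import numpy as np

def sim_real_correlation(sim_wer, real_wer):
    """Pearson correlation between per-system SimData and RealData WERs."""
    return np.corrcoef(sim_wer, real_wer)[0, 1]
```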

  29. ASR results: some remarks
      - Strategies often present in the top-performing systems include:
        - some form of dereverberation (STFT / amplitude-spectrum / feature domain)
        - linear multi-channel filtering (MVDR, delay-and-sum, etc.), often for de-noising
        - a strong back-end (e.g., DNN-HMM recognizer, sophisticated adaptation, robust feature extraction, multi-condition training)
        - system combination
      - However, it is hard to tell the exact impact of each individual SE/ASR technique. (That is something we should discover at this workshop!)
      - More work is required to reach clean/headset performance. (E.g., on RealData, the headset WER is roughly 60% of that of the best-performing system.)

  30. Now, the SE part...

  31. An important question in the SE task
      - Most submissions managed to improve the objective measures (cf. webpage, presentations), but how about their subjective quality?
