Subproject II: Robustness in Speech Recognition


  1. Subproject II: Robustness in Speech Recognition

  2. Members (1/2)
     - Hsiao-Chuan Wang (PI), National Tsing Hua University
     - Jen-Tzung Chien (Co-PI), National Cheng Kung University
     - Lin-shan Lee (Co-PI), National Taiwan University
     - Jeih-Weih Hung, National Chi Nan University
     - Sin-Horng Chen, National Chiao Tung University
     - Hsin-min Wang, Academia Sinica

  3. Members (2/2)
     - Yih-Ru Wang, National Chiao Tung University
     - Yuan-Fu Liao, National Taipei University of Technology
     - Berlin Chen, National Taiwan Normal University

  4. Research Theme
     (Block diagram: input speech passes through signal-level processing, i.e. signal processing plus feature extraction & transformation, into speech decoding, including word graph rescoring, which produces the output recognition results; decoding draws on adaptive HMM models at the model level, and on an adaptive pronunciation lexicon and adaptive language models at the lexical level.)

  5. Research Roadmap
     Current achievements:
     - Speech enhancement & wavelet processing
     - Cepstral moment normalization & temporal filtering
     - Microphone array and noise cancellation approaches
     - Discriminative adaptation for acoustic and linguistic models
     - Maximum entropy modeling & data mining algorithms
     - Robust language modeling
     Future directions & applications:
     - Speech recognition in different adverse environments, e.g. car, home, etc.
     - Robust broadcast news transcription
     - Lecture speech recognition
     - Spontaneous speech recognition
     - Next-generation automatic speech recognition
     - Powerful machine learning approaches for complicated robustness problems

  6. Signal Level Approaches
     - Speech enhancement: harmonic retaining, perceptual factor analysis, etc.
     - Robust feature representation: higher-order cepstral moment normalization, data-driven temporal filtering, etc.
     - Microphone array processing: microphone array with post-filtering, etc.
     - Missing-feature approaches: sub-space missing-feature imputation and environment sniffing, mismatch-aware stochastic matching, etc.

  7. Higher-Order Cepstral Moment Normalization (HOCMN) (1/3)
     - Cepstral feature normalization is widely used:
       - CMS: normalizes the first moment
       - CMVN: normalizes the first and second moments
       - HEQ: normalizes the full distribution (moments of all orders)
     - How about normalizing only a few higher-order moments? Disturbances of larger magnitude may be the major sources of recognition errors, and these are better reflected in higher-order moments. (A sketch of the idea follows.)
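
As a rough illustration of the idea (not the exact algorithm behind the slides; in particular, the normalization target for the N-th moment is an assumption made here), a per-utterance HOCMN[1, N] might look like:

```python
import numpy as np

def hocmn(features, N=4):
    """Per-utterance HOCMN[1, N] sketch: normalize the 1st and the N-th
    (even) moment of each cepstral dimension.
    features: array of shape (num_frames, num_dims)."""
    # First-moment normalization, as in CMS: zero the mean per dimension.
    x = features - features.mean(axis=0)
    # N-th-moment normalization: rescale each dimension so that its N-th
    # sample moment becomes 1 (an illustrative target, not taken from
    # the slides).
    m_n = np.mean(x ** N, axis=0)
    scale = np.maximum(m_n, 1e-12) ** (1.0 / N)
    return x / scale
```

Setting N = 2 recovers CMVN-like behavior, while a larger even N weights the tails of the feature distribution, where large disturbances show up, more heavily.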

  8. Higher-Order Cepstral Moment Normalization (HOCMN) (2/3)
     - Experimental results: Aurora 2, clean-condition training, word accuracy averaged over 0-20 dB and all types of noise (sets A, B, C)
     (Plot: word accuracy (74-83%) versus N (even integer, 0-70) for (a) HOCMN[1,N] full-utterance, with the 1st and N-th moments normalized, and (b) HOCMN[1,N] with L=86, both compared against the CMVN (L=86) baseline.)

  9. Higher-Order Cepstral Moment Normalization (HOCMN) (3/3)
     - Experimental results: Aurora 2, clean-condition training, word accuracy averaged over 0-20 dB for each type of noise condition
     (Bar chart: word accuracy (72-86%) for CMVN, HOCMN[1,5,100], and HEQ across the noise types of Sets A, B, and C.)
     - HOCMN is significantly better than CMVN for all types of noise
     - HOCMN is better than HEQ for most types of noise, except the "Subway" and "Street" noise

  10. Data-Driven Temporal Filtering
     - The derived filters operate on the temporal trajectories of the original features
     - The filters can be derived in a data-driven manner according to PCA/LDA/MCE criteria (see the sketch below)
     - They can be integrated with cepstral mean and variance normalization (CMVN) for further performance gains
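
A minimal sketch of the PCA variant (the filter length, the function name, and the choice of the leading eigenvector as the filter are illustrative assumptions; the LDA/MCE derivations are not shown):

```python
import numpy as np

def pca_temporal_filter(utterances, dim=0, L=11):
    """Derive a temporal FIR filter for one cepstral dimension from data
    using a PCA criterion.
    utterances: list of (num_frames, num_dims) feature arrays."""
    segments = []
    for feat in utterances:
        traj = feat[:, dim]                    # one coefficient's trajectory
        for t in range(len(traj) - L + 1):
            segments.append(traj[t:t + L])     # length-L temporal windows
    X = np.asarray(segments)
    X = X - X.mean(axis=0)
    # The eigenvector with the largest eigenvalue of the window covariance
    # serves as the data-driven temporal filter.
    eigvals, eigvecs = np.linalg.eigh(np.cov(X.T))
    return eigvecs[:, -1]
```

The filter would then be applied along time, e.g. `np.convolve(traj, fir, mode='same')`, and can be cascaded with CMVN as the slide suggests.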

  11. Microphone Array Processing (1/3)
     - Integrated with model-level approaches (MLLR)
     (Block diagram: the microphone array input is time-aligned by a delay estimator using the Time Domain Coherence Measure (TDCM) and combined by a delay-and-sum beamformer; the enhanced speech signal is recognized with HMM parameters adapted from the initial HMM parameters via MLLR model adaptation, yielding the recognition result.)
     A toy delay-and-sum beamformer is sketched below.
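
A minimal sketch of the delay-and-sum stage, assuming integer sample delays are already estimated (real systems, including the TDCM-based one above, handle fractional delays):

```python
import numpy as np

def delay_and_sum(channels, delays):
    """Toy delay-and-sum beamformer: advance every microphone channel by
    its estimated integer delay (in samples) and average.
    Wrap-around introduced by np.roll is ignored in this toy version."""
    n = min(len(c) for c in channels)
    aligned = [np.roll(c[:n], -d) for c, d in zip(channels, delays)]
    return np.mean(aligned, axis=0)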

  12. Microphone Array Processing (2/3)
     - Further improved with Wiener filtering and a Spectral Weighting Function (SWF)
     (Block diagram: the four microphone signals x1-x4 are aligned with estimated delays, combined by the delay-and-sum beamformer, and transformed by FFT; an improved Wiener filter with weight selection and the spectral weighting function are applied in the frequency domain, and an IFFT yields the enhanced signal.)
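
The slide does not spell out the improved Wiener filter or the SWF; as a baseline reference point only, a plain per-bin Wiener gain with a spectral floor looks like:

```python
import numpy as np

def wiener_gain(noisy_psd, noise_psd, floor=0.1):
    """Basic per-bin Wiener weighting applied to the beamformer output
    spectrum. The slide's weight selection and spectral weighting
    function are more elaborate; this shows only the basic form."""
    snr = np.maximum(noisy_psd - noise_psd, 0.0) / np.maximum(noise_psd, 1e-12)
    return np.maximum(snr / (1.0 + snr), floor)
```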

  13. Microphone Array Processing (3/3)
     - Applications to in-car speech recognition
     - The Power Spectral Coherence Measure (PSCM) is used to estimate the time delay (a related frequency-domain estimator is sketched below)
     (Figure: physical in-car configuration, with the microphone array mounted at 45º near the wheel, about 90 cm from the speaker, and air conditioners, fan noise, and a personal computer as noise sources.)
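
The slide does not give the PSCM formula, so the sketch below uses the closely related generalized cross-correlation with phase transform (GCC-PHAT); both estimate delay from the cross-power spectrum of two microphone signals:

```python
import numpy as np

def estimate_delay(x, y, fs):
    """Frequency-domain time-delay estimation between two microphone
    signals via GCC-PHAT (standing in for PSCM, whose exact weighting
    is not given on the slide)."""
    n = len(x) + len(y)
    X = np.fft.rfft(x, n)
    Y = np.fft.rfft(y, n)
    cross = X * np.conj(Y)
    cross = cross / np.maximum(np.abs(cross), 1e-12)  # keep phase only
    cc = np.fft.irfft(cross, n)
    lag = int(np.argmax(np.abs(cc)))
    if lag > n // 2:                                  # map to negative lags
        lag -= n
    return lag / fs                                   # delay in seconds
```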

  14. Model Level Approaches
     - Improved parallel model combination
     - Bayesian learning of speech duration models
     - Aggregate a posteriori linear regression adaptation

  15. Aggregate a Posteriori Linear Regression (AAPLR) (1/3)
     - Discriminative linear regression adaptation
     - A prior density of the regression matrix is incorporated to provide Bayesian learning capabilities
     - A closed-form solution is obtained for rapid adaptation
     (Diagram: prior information of the regression matrix and a discriminative criterion feed into AAPLR, which combines Bayesian learning with a closed-form solution.)

  16. Aggregate a Posteriori Linear Regression (AAPLR) (2/3)
     - MAPLR objective:
       $J_{\mathrm{MAPLR}}(\hat{W}_r) = \sum_{m=1}^{M} \sum_{n=1}^{N_m} \log \frac{p(X_{m,n} \mid \hat{W}_r, \lambda_m)\, g(\hat{W}_r)}{p(X_{m,n})}$
     - AAPLR objective, aggregated over all model classes $m$ with probabilities $P_m$:
       $J_{\mathrm{AAPLR}}(W_r) = \sum_{m=1}^{M} \sum_{n=1}^{N_m} \log \frac{\frac{1}{M}\, p(X_{m,n} \mid W_r, \lambda_m)\, P_m\, g(W_r)}{p(X_{m,n})}$
     - Discriminative training (a sketch of $d_m$ follows):
       $J_{\mathrm{AAPLR}}(W) = \frac{1}{M} \sum_{m=1}^{M} \sum_{n=1}^{N_m} l\big(d_m^{\mathrm{AAPLR}}\big)$
       $d_m^{\mathrm{AAPLR}} = g(X_m; \lambda_m, W_r) - \log \Big\{ \frac{1}{M-1} \sum_{j \neq m} \exp\big[\eta\, g(X_j; \lambda_j, W_r)\big] \Big\}^{1/\eta}$
       $g(X_m; \lambda_m, W_r) = \log \{ p(X_{m,n} \mid W_r, \lambda_m)\, g(W_r) \}$
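
A minimal numeric sketch of the misclassification measure $d_m$ above, assuming the class discriminants $g(X_j; \lambda_j, W_r)$ are already computed and omitting the outer loss $l(\cdot)$ (typically a sigmoid):

```python
import numpy as np

def aaplr_misclassification(g, m, eta=1.0):
    """Misclassification measure d_m: the target-class discriminant minus
    a soft-max over the competing classes.
    g[j] holds g(X_j; lambda_j, W_r); eta is the smoothing exponent."""
    g = np.asarray(g, dtype=float)
    competitors = np.delete(g, m)
    # (1/eta) * log( (1/(M-1)) * sum_{j != m} exp(eta * g_j) )
    soft_max = np.log(np.mean(np.exp(eta * competitors))) / eta
    return g[m] - soft_max
```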

  17. Aggregate a Posteriori Linear Regression (AAPLR) (3/3)
     - Comparison with other approaches:
       Method | Estimation criterion | Discriminative adaptation | Bayesian learning | Closed-form solution
       MLLR   | ML  | No  | No  | Yes
       MAPLR  | MAP | No  | Yes | Yes
       MCELR  | MCE | Yes | No  | No
       CMLLR  | MMI | Yes | No  | Yes
       AAPLR  | AAP | Yes | Yes | Yes

  18. Lexical Level Approaches
     - Pronunciation modeling for spontaneous Mandarin speech
     - Language model adaptation
       - Latent semantic analysis and smoothing
       - Maximum entropy principle
     - Association pattern language model

  19. Pronunciation Modeling for Spontaneous Mandarin Speech
     - A multiple-pronunciation lexicon is constructed automatically using a three-stage framework that reduces the confusion introduced by the added pronunciations:
       1. Automatically generate the possible surface forms
       2. Rank the pronunciations to avoid confusion across different words
       3. Keep only the necessary pronunciations to avoid confusion across different words
     A toy sketch of the rank-and-prune idea follows.
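
As a toy illustration only (all names, the scoring, and the data structures below are hypothetical, not the project's actual three-stage framework), the ranking and pruning stages might be combined like this:

```python
def build_multipron_lexicon(candidates, max_prons=3):
    """Toy sketch: rank each word's candidate surface-form pronunciations
    by score and keep only those not already claimed by another word,
    limiting lexicon growth.
    candidates: dict mapping word -> list of (pronunciation, score)."""
    lexicon, claimed = {}, set()
    for word, prons in candidates.items():
        ranked = sorted(prons, key=lambda p: p[1], reverse=True)
        kept = []
        for pron, _score in ranked:
            if pron not in claimed and len(kept) < max_prons:
                kept.append(pron)
                claimed.add(pron)  # avoid confusion across different words
        lexicon[word] = kept
    return lexicon
```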

  20. Association Pattern Language Model (1/5)
     - N-grams consider only local relations
     - Trigger pairs consider long-distance relations, but only between two associated words
     - Word associations can be expanded to more than two distant words
     - A new algorithm discovers association patterns via data mining techniques

  21. Association Pattern Language Model (2/5)
     - Bigram & trigram
     (Figure: in the word sequence "... Twin Towers ... Sept. 11 ... George Bush", bigram and trigram links connect only adjacent words.)
     - Trigger pairs
     (Figure: the same sequence with trigger-pair arcs linking two distant associated words, in addition to the bigram links.)

  22. Association Pattern Language Model (3/5)
     - Association patterns
     (Figure: the same sequence with association-pattern arcs spanning several distant associated words, e.g. linking "Twin Towers", "Sept. 11", and "George Bush", in addition to the bigram links.)

  23. Association Pattern Language Model (4/5)
     - Association pattern mining procedure (a rough illustration follows)
     (Figure: flow chart of the association pattern mining procedure.)
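
The flow chart itself is not reproduced here, so the sketch below is only a generic Apriori-style stand-in for mining co-occurring word sets from history windows; the thresholds and window handling are assumptions:

```python
from collections import Counter
from itertools import combinations

def mine_association_patterns(histories, min_support=5, max_size=3):
    """Count co-occurring word sets over history windows; any set seen at
    least min_support times becomes a candidate association pattern.
    histories: iterable of word lists (history windows)."""
    counts = Counter()
    for window in histories:
        vocab = sorted(set(window))
        for size in range(2, max_size + 1):
            for pattern in combinations(vocab, size):
                counts[pattern] += 1
    return {p: c for p, c in counts.items() if c >= min_support}
```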

  24. Association Pattern Language Model (5/5)
     - An association pattern set $\Omega_{AS}$ covering different association steps is constructed
     - The mutual information of all association patterns is merged:
       $\mathrm{MI}(W_{a-1}^{q} \rightarrow w_j) = \log \frac{p(W_{a-1}^{q}, w_j)}{p(W_{a-1}^{q})\, p(w_j)}$
       $\log p_{AS}(W) = \sum_{q=1}^{L} \log p(w_q) + \sum_{s=1}^{S} \sum_{(W_{a-1}^{s,q} \rightarrow w_j) \in \Omega_{AS}} \mathrm{MI}(W_{a-1}^{s,q} \rightarrow w_j)$
     - The association pattern n-gram is estimated by interpolation (a numeric sketch follows):
       $\log \tilde{p}(W) = a_1 \log p(W) + a_2 \log p_{AS}(W)$
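
A direct numeric sketch of the two formulas above (the interpolation weights are illustrative placeholders, not values from the slides):

```python
import math

def association_mi(p_joint, p_history, p_word):
    """Mutual information MI(W -> w_j) of one association pattern:
    log p(W, w_j) / (p(W) p(w_j))."""
    return math.log(p_joint / (p_history * p_word))

def interpolated_logprob(log_p_ngram, log_p_as, a1=0.7, a2=0.3):
    """Log-domain interpolation of the baseline n-gram score with the
    association-pattern score."""
    return a1 * log_p_ngram + a2 * log_p_as
```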
