Speech Processing 15-492/18-492: Speaker ID
Who is speaking? (PowerPoint presentation)


  1. Speech Processing 15-492/18-492 Speaker ID

  2. Who is speaking?
     - Speaker ID, also called speaker recognition
     - When do you use it?
       - Security, access control
       - Speaker-specific modeling: recognize the speaker and use their options
       - Diarization: in multi-speaker environments, assign speech to different people, allowing questions like "did Fred agree or not?"

  3. Voice Identity
     - What makes a voice identity?
       - Lexical choice: "Woo-hoo", "I pity the fool ..."
       - Phonetic choice
       - Intonation and duration
       - Spectral qualities (vocal tract shape)
       - Excitation

  4. Voice Identity (continued)
     - Same factors: lexical choice, phonetic choice, intonation and duration, spectral qualities, excitation
     - But which is most discriminative?

  5. GMM Speaker ID
     - Look only at the spectral part (which is, roughly, vocal tract shape)
     - Build a single Gaussian of MFCCs: means and standard deviations over all speech
     - In practice, build an N-mixture Gaussian (32 or 64 mixtures)
     - Build a model for each speaker
     - Score the test data against each model and see which it is closest to
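
The per-speaker scoring idea above can be sketched in a few lines. This is a minimal single-Gaussian baseline (diagonal covariance, not the 32/64-mixture version the slide mentions); MFCC extraction is assumed to have happened already, and the function names and toy data are illustrative only:

```python
import numpy as np

def train_speaker_model(mfcc_frames):
    """Fit a single diagonal Gaussian: per-dimension mean and std of MFCC frames."""
    mu = mfcc_frames.mean(axis=0)
    sigma = mfcc_frames.std(axis=0) + 1e-6  # floor to avoid division by zero
    return mu, sigma

def log_likelihood(frames, model):
    """Average per-frame log-likelihood of test frames under a diagonal Gaussian."""
    mu, sigma = model
    z = (frames - mu) / sigma
    ll = -0.5 * (z ** 2 + np.log(2 * np.pi * sigma ** 2)).sum(axis=1)
    return ll.mean()

def identify(test_frames, models):
    """Return the speaker whose model scores the test speech highest."""
    scores = {spk: log_likelihood(test_frames, m) for spk, m in models.items()}
    return max(scores, key=scores.get)

# Toy arrays standing in for MFCC frames (rows = frames, cols = coefficients)
rng = np.random.default_rng(0)
models = {
    "alice": train_speaker_model(rng.normal(0.0, 1.0, (500, 13))),
    "bob": train_speaker_model(rng.normal(3.0, 1.0, (500, 13))),
}
test = rng.normal(3.0, 1.0, (100, 13))  # frames drawn near "bob"'s distribution
print(identify(test, models))  # expect "bob"
```

Replacing the single Gaussian with an N-mixture GMM changes only `train_speaker_model` and `log_likelihood`; the identify-by-best-score loop stays the same.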

  6. GMM Speaker ID
     - How close does it need to be? One or two standard deviations?
     - The set of speakers needs to be distinct: if two speakers' models are closer than one or two standard deviations, you get confusion
     - Should you have a "general" model that is not one of the set of training speakers?
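
One common way to use such a "general" model is as a rejection baseline: accept the best-scoring speaker only if it beats the general model by some margin, otherwise answer "none". The margin threshold and all names here are illustrative assumptions, not part of the lecture:

```python
def identify_or_reject(scores, background_score, margin=1.0):
    """Pick the best-scoring speaker, but answer None unless that speaker
    beats the 'general' (background) model score by at least `margin`."""
    best = max(scores, key=scores.get)
    if scores[best] - background_score < margin:
        return None  # too close to the background model: likely an unknown speaker
    return best

# Known speaker: "bob" clearly beats the background model
print(identify_or_reject({"alice": -42.0, "bob": -38.0}, background_score=-40.0))
# Unknown speaker: every model scores about as well as the background model
print(identify_or_reject({"alice": -42.0, "bob": -39.5}, background_score=-40.0))
```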

  7. GMM Speaker ID
     - Works well on constrained tasks:
       - Similar acoustic conditions (not telephone vs. wide-band)
       - Same speaking style as the training data
       - Cooperative users
     - Doesn't work well when:
       - The speaking style differs (conversation vs. lecture)
       - The speaker shouts or whispers
       - The speaker has a cold
       - The language differs

  8. Speaker ID Systems
     - Training:
       - Example speech from each speaker
       - Build a model for each speaker (maybe an exception model too)
     - ID phase:
       - Compare the test speech to each model
       - Choose the "closest" model (or none)

  9. Basic Speaker ID system

  10. Accuracy
     - Works well on smaller sets (20-50 speakers)
     - As the number of speakers increases, models begin to overlap and speakers get confused
     - What can we do to get better distinctions?

  11. What about transitions?
     - Not just modeling isolated frames: look at phone sequences
     - But ASR has lots of variation and covers a limited amount of the phonetic space
     - What about using lots of ASR engines?

  12. Phone-based Speaker ID
     - Use *lots* of ASR engines, but they need to be different engines
     - Use ASR engines from many different languages:
       - It doesn't matter what language the speech is in
       - Many different engines give lots of variation
     - Build models of which phones are recognized (actually HMM states, not phones)
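
A minimal sketch of this idea: pool the phone (or HMM-state) labels that the recognizers emit for each speaker into a count table, then score a test label stream under each speaker's smoothed label distribution. This is a toy unigram version under assumed names and data; real systems model richer state statistics:

```python
import math
from collections import Counter

def phone_model(label_seqs):
    """Pool phone/state labels from several recognizer runs into one count table."""
    counts = Counter()
    for seq in label_seqs:
        counts.update(seq)
    return counts

def label_score(labels, counts, alpha=1.0):
    """Add-one-smoothed multinomial log-likelihood of a label sequence."""
    vocab = set(counts) | set(labels)
    total = sum(counts.values()) + alpha * len(vocab)
    return sum(math.log((counts[l] + alpha) / total) for l in labels)

def identify_by_phones(labels, speaker_models):
    """Return the speaker whose label distribution best explains the test labels."""
    return max(speaker_models, key=lambda s: label_score(labels, speaker_models[s]))

# Toy label streams standing in for the output of several ASR engines
models = {
    "alice": phone_model([["aa", "aa", "iy", "t"], ["aa", "iy", "iy"]]),
    "bob":   phone_model([["uw", "uw", "m", "b"], ["uw", "m", "m"]]),
}
print(identify_by_phones(["uw", "m", "uw"], models))  # expect "bob"
```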

  13. Phone-based SID (Jin)

  14. Phone-based Speaker ID
     - Much better distinctions on larger datasets
     - Can work with 100-plus voices
     - Slightly more robust across styles and channels

  15. But we need more ...
     - Combined models: GMM models and phone-based models
       - Combining them gives slightly better results
     - What else? Prosody (duration and F0)
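
The simplest way to combine the two model families is linear score fusion: interpolate each speaker's GMM score and phone-based score before picking the winner. The weight and the toy scores below are assumptions for illustration:

```python
def fuse_scores(gmm_scores, phone_scores, w=0.5):
    """Linearly interpolate per-speaker GMM and phone-based log scores."""
    return {s: w * gmm_scores[s] + (1.0 - w) * phone_scores[s] for s in gmm_scores}

# Toy scores: the GMM slightly prefers S1, the phone model strongly prefers S2
gmm = {"S1": -10.0, "S2": -12.0}
phone = {"S1": -20.0, "S2": -8.0}
fused = fuse_scores(gmm, phone)
print(max(fused, key=fused.get))  # expect "S2"
```

The weight `w` would normally be tuned on held-out data; prosodic scores (duration, F0) can be folded in as a third term the same way.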

  16. Can VC beat Speaker ID?
     - Can we fake voices? Can we fool speaker ID systems? Can we make lots of money out of it?
     - Yes to the first two (Jin, Toth, Black and Schultz, ICASSP 2008)

  17. Training/Testing Corpus
     - LDC CSR-I (WSJ0):
       - US English studio read speech
       - 24 male speakers
       - 50 training sentences, 5 test, plus 40 additional training sentences
       - Average sentence length: 7 s
     - VT source speakers:
       - Kal_diphone (synthetic speech)
       - A US English male natural speaker (not all sentences)

  18. Experiment I
     - VT GMM:
       - Kal_diphone as the source speaker
       - GMM trained on 50 sentences
       - GMM transforms the 5 test sentences
     - SID GMM:
       - Trained on 50 sentences
       - (On the 5 natural test sentences: 100% correct)

  19. GMM-VT vs GMM-SID
     - VT fools GMM-SID 100% of the time

  20. GMM-VT vs GMM-SID
     - Not surprising (others have shown this):
       - Both optimize spectral properties
       - These used the same training set (different training sets don't change the result)
     - VT output voices sound "bad": poor excitation and voicing decisions
     - Humans can distinguish VT vs. natural speech
       - GMM-SID can distinguish them too, if VT speech is included in the training set

  21. GMM-VT vs Phone-SID
     - VT output is always identified as S17, S24 or S20
     - Kal_diphone itself is recognized as S17 and S24
     - Phone-SID seems to recognize the source speaker

  22. What about Synthetic Speech?
     - Clustergen (CG): statistical parametric synthesizer, MLSA filter for resynthesis
     - Clunits (CL): unit selection synthesizer, waveform concatenation

  23. Synth vs GMM-SID
     - Smaller is better

  24. Synth vs Phone-SID
     - Smaller is better
     - Opposite order from GMM-SID
