Efficient Search Strategies in Hierarchical Pattern Recognition Systems
by Neeraj Deshmukh, Joseph Picone, Yu-Hung Kao
Dept. of EE, Inst. for Signal & Info. Proc., Systems & Info. Sci. Lab., Boston
[Figure 1: Training and Recognition Schematic. Panel 1, "Training of Models in Speech Recognition": Speech Input → Digital Signal Processing → Learning Algorithm (re-estimation of parameters) → Trained Models, with Model Structure and Training Data as inputs. Panel 2, "Continuous Speech Recognition System": Speech Input → Digital Signal Processing → Hypothesis Generation → SEARCH → final recognized word sequence (e.g. "the cat in the hat got the rat"), drawing on Language Models, Acoustic Models, and Test Data.]

[Figure 2: Hierarchical structure. "Hierarchical Model for Speech Recognition": sentence level → word level → phone level, with an HMM structure at the phone level.]
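The hierarchical model of Figure 2 (sentences expanding into words, words into phone sequences, each phone backed by an HMM) can be sketched minimally as below; the lexicon, phone labels, and function names are hypothetical illustrations, not the paper's actual data structures.

```python
# Hypothetical sketch of the three-level hierarchy:
# sentence level -> word level -> phone level (one HMM per phone).
# The toy lexicon and phone symbols below are illustrative only.

lexicon = {            # word level: word -> phone sequence
    "the": ["dh", "ax"],
    "cat": ["k", "ae", "t"],
}

def expand_sentence(words):
    """Expand a sentence-level hypothesis into its phone-level sequence."""
    phones = []
    for word in words:
        phones.extend(lexicon[word])
    return phones

# Each phone would then index an HMM (states + transition probabilities)
# used to score acoustic frames during search.
print(expand_sentence(["the", "cat"]))  # ['dh', 'ax', 'k', 'ae', 't']
```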
[Figure 3: FSVS pruning algorithm ("Frame Synchronous Viterbi Pruning"). Axes: frames × nodes in order. Legend: nodes with scores above the FSVS pruning threshold survive; nodes whose score falls below the FSVS pruning threshold are pruned at that frame.]

During Viterbi beam search, at the end

[Figure 4: Memory usage for (speech) sentence recognition. Panel a: Viterbi beam search; panel b: FSVS. Axes: frames (200–1200) vs. active sds. Parameters: max_delta = 0.3, delta_slew = 2*max_delta; panel b additionally prune_hyp_factor = 1.5, prune_hyp_thresh = 35.]

Type Sent. Comptn. Mem. W
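The frame-synchronous pruning illustrated in Figure 3, where nodes whose scores fall below a threshold are discarded at the end of each frame, can be sketched as follows; the function name, the `(node, score)` representation, and the beam-width parameter are assumptions for illustration, not the paper's implementation.

```python
# Minimal sketch of frame-synchronous Viterbi beam pruning (FSVS-style).
# Hypotheses are assumed to be (node_id, score) pairs, higher score = better;
# beam_width is a hypothetical pruning parameter.

def prune_frame(hypotheses, beam_width):
    """At the end of a frame, keep only hypotheses whose score lies within
    beam_width of the best score; all others are pruned."""
    if not hypotheses:
        return []
    best = max(score for _, score in hypotheses)
    threshold = best - beam_width
    return [(node, score) for node, score in hypotheses if score >= threshold]

# Example: one frame's active nodes.
active = [("n1", -10.0), ("n2", -12.5), ("n3", -40.0)]
survivors = prune_frame(active, beam_width=20.0)
# "n3" is pruned because -40.0 < (-10.0 - 20.0); "n1" and "n2" survive.
```

Repeating this step every frame keeps the active-node count (and hence memory usage) bounded, which is the effect plotted in Figure 4.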