

  1. Efficient Estimation of Word Representations in Vector Space

  2. Topics
     o Language Models in NLP
     o Markov Models (n-gram model)
     o Distributed Representation of Words
     o Motivation for word vector models of data
     o Feedforward Neural Network Language Model (Feedforward NNLM)
     o Recurrent Neural Network Language Model (Recurrent NNLM)
     o Continuous Bag of Words Model
     o Continuous Skip-gram Model
     o Results
     o References

  3. n-gram model for NLP
     o Traditional NLP models are based on predicting the next word given the previous n − 1 words; this is also known as an n-gram model.
     o An n-gram model defines the probability of a word w, given the previous words x_1, x_2, ..., x_{n−1}, using an (n−1)th-order Markov assumption.
     o Mathematically, the parameter is
       q(w | x_1, x_2, ..., x_{n−1}) = count(x_1, x_2, ..., x_{n−1}, w) / count(x_1, x_2, ..., x_{n−1})
       where w, x_1, x_2, ..., x_{n−1} ∈ V and V is a vocabulary of fixed size.
     o The above model is based on maximum-likelihood estimation.
     o The probability of any sentence can be obtained by multiplying the n-gram probabilities of its words.
     o Estimation can be refined using linear interpolation or discounting methods.
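As a concrete illustration of the count-based estimate above, here is a minimal sketch (not from the slides; the toy corpus and names are made up) that computes maximum-likelihood bigram probabilities, i.e. the n = 2 case of the formula:

```python
from collections import Counter

# Toy corpus; in practice this would be a large text collection.
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count unigrams and bigrams for a 2-gram (n = 2) model.
unigrams = Counter(corpus)
bigrams = Counter(zip(corpus, corpus[1:]))

def bigram_prob(word, prev):
    """Maximum-likelihood estimate q(word | prev) = count(prev, word) / count(prev)."""
    return bigrams[(prev, word)] / unigrams[prev]

print(bigram_prob("cat", "the"))  # count(the, cat) / count(the) = 2 / 4 = 0.5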

  4. Drawbacks associated with n-gram models
     o Curse of dimensionality: a large number of parameters must be learned even for a small vocabulary.
     o An n-gram model lives in a discrete space, so it is difficult to generalize its parameters; generalization is easier when the model has a continuous space.
     o Simply scaling up n-gram models does not give the expected performance improvement for vocabularies with limited data.
     o n-gram models do not perform well on word-similarity tasks.

  5. Distributed representation of words as vectors
     o Associate with each word in the vocabulary a distributed word feature vector in ℝ^m, e.g. genesis → [0.537, 0.299, 0.098, ..., 0.624] ∈ ℝ^m.
     o A vocabulary V of size |V| therefore has |V| × m free parameters, which need to be learned using some learning algorithm.
     o These distributed feature vectors can be learned either in an unsupervised fashion as part of a pre-training procedure, or in a supervised way.
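To make the |V| × m parameter count concrete, the sketch below (assumed vocabulary, dimension, and names, not from the slides) stores all word vectors as one matrix and looks a word up by its row index:

```python
import numpy as np

vocab = ["genesis", "exodus", "kings", "queen"]   # toy vocabulary V
m = 5                                             # feature-vector dimension
word_to_idx = {w: i for i, w in enumerate(vocab)}

# |V| x m free parameters, initialized randomly and learned later.
rng = np.random.default_rng(0)
embeddings = rng.normal(scale=0.1, size=(len(vocab), m))

def vector(word):
    """Return the distributed feature vector of a word (one row of the table)."""
    return embeddings[word_to_idx[word]]

print(vector("genesis").shape)   # (5,)
print(embeddings.size)           # |V| * m = 20 free parameters
```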

  6. Why a word vector model?
     o This model is based on continuous real-valued variables, so the probability distributions learned by generative models are smooth functions.
     o Therefore, unlike in n-gram models, a sequence of words that never appears in the training corpus is not a big issue; generalization is better with this approach.
     o Multiple degrees of similarity: similarity between words goes beyond basic syntactic and semantic regularities. For example:
       vector(King) − vector(Man) + vector(Woman) ≈ vector(Queen)
       vector(Paris) − vector(France) + vector(Italy) ≈ vector(Rome)
     o It is easier to train vector models on unsupervised data.
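The analogy arithmetic above amounts to a nearest-neighbour search around the query vector. A minimal sketch, with a tiny hand-built embedding chosen so the analogy actually holds (real vectors are learned, not hand-crafted):

```python
import numpy as np

# Toy vectors with interpretable axes (royalty, gender, capital-ness); illustrative only.
vectors = {
    "king":  np.array([0.9,  0.9, 0.0]),
    "queen": np.array([0.9, -0.9, 0.0]),
    "man":   np.array([0.0,  0.9, 0.0]),
    "woman": np.array([0.0, -0.9, 0.0]),
    "paris": np.array([0.0,  0.0, 0.9]),
}

def cosine(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

# vector(King) - vector(Man) + vector(Woman) should land closest to vector(Queen).
query = vectors["king"] - vectors["man"] + vectors["woman"]
best = max((w for w in vectors if w not in {"king", "man", "woman"}),
           key=lambda w: cosine(vectors[w], query))
print(best)  # queen
```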

  7. Learning distributed word vector representations
     o Feedforward Neural Network Language Model: the joint probability distribution of word sequences is learned along with the word feature vectors using a feedforward neural network.
     o Recurrent Neural Network Language Model: the same idea, but based on a recurrent neural network.
     o Continuous Bag of Words: based on a log-linear classifier whose input is the average of past and future word vectors; the goal is to predict the current word from its surrounding context.
     o Continuous Skip-gram Model: also based on a log-linear classifier, but here the model tries to predict the past and future words surrounding a given word.

  8. Feedforward Neural Network Language Model
     o Initially proposed by Yoshua Bengio et al.
     o It is related to the n-gram language model, as it also aims to learn the probability function of word sequences of length n.
     o The input is the concatenated feature vectors of the words w_{n−1}, w_{n−2}, ..., w_2, w_1, and the training criterion is to predict the word w_n.
     o The output of the model gives the estimated probability of a given sequence of n words.
     o The network architecture consists of a projection layer, a hidden layer of neurons, an output layer, and a softmax function to evaluate the joint probability distribution of words.
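A minimal numpy sketch of this architecture (assumed sizes and names, not the authors' implementation): the previous n − 1 word vectors are concatenated, passed through one hidden layer, and a softmax over the vocabulary gives P(w_n | w_{n−1}, ..., w_1).

```python
import numpy as np

V, m, H, n = 1000, 50, 100, 4      # vocab size, vector dim, hidden units, n-gram order
rng = np.random.default_rng(0)

C   = rng.normal(scale=0.1, size=(V, m))              # word-vector lookup table
W_h = rng.normal(scale=0.1, size=((n - 1) * m, H))    # projection -> hidden
W_o = rng.normal(scale=0.1, size=(H, V))              # hidden -> output

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def predict(context_ids):
    """P(w_n | w_{n-1}, ..., w_1) for a context of n-1 word indices."""
    x = C[context_ids].reshape(-1)        # concatenated (n-1)*m input vector
    h = np.tanh(x @ W_h)                  # hidden layer
    return softmax(h @ W_o)               # distribution over the vocabulary

p = predict([3, 17, 42])                  # three preceding word indices
print(p.shape, p.sum())                   # (1000,) 1.0 (up to rounding)
```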

  9. Feedforward NNLM
     [Architecture diagram: training corpus -> lookup table of word vectors -> input projection layer (concatenated context vectors w_1, w_2, ..., w_{n−2}, w_{n−1}) -> hidden layer -> output layer -> softmax giving P(w_n | w_{n−1}, ..., w_2, w_1)]

  10. Feedforward NNLM
     o A fairly large model in terms of free parameters.
     o The neural network weights amount to (n − 1) × m × H + H × V parameters (context size n − 1, vector dimension m, H hidden units, vocabulary size V).
     o The training criterion is to predict the nth word.
     o Training uses forward propagation and backpropagation with mini-batch gradient descent.
     o The number of output units evaluated per prediction can be reduced to about log2(V) using hierarchical softmax layers, which significantly reduces training time.
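A quick back-of-the-envelope check of that parameter count and of the log2(V) output reduction (the sizes below are illustrative, not taken from the paper):

```python
import math

n, m, H, V = 5, 100, 500, 1_000_000   # illustrative sizes

dense_params = (n - 1) * m * H + H * V
print(f"hidden + output weights: {dense_params:,}")   # 500,200,000

# Hierarchical softmax: each prediction touches about log2(V) output nodes
# instead of all V of them.
print(f"softmax evaluations per word: {V:,} -> ~{math.ceil(math.log2(V))}")  # ~20
```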

  11. Recurrent Neural Network Language Model
     o Initially implemented by Tomas Mikolov, probably inspired by Yoshua Bengio's seminal work on NNLM.
     o Uses a recurrent neural network, where the input layer consists of the current word vector together with the hidden-neuron values from the previous word.
     o The training objective is to predict the current word.
     o Contrary to the feedforward NNLM, it keeps building a kind of history of the previously seen words, so the context window of analysis is effectively variable.
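A single Elman-style recurrence step, sketched in numpy with assumed sizes and names: the input combines the current word vector with the previous hidden state, and the hidden state carries a variable-length history of the words seen so far.

```python
import numpy as np

V, m, H = 1000, 50, 100
rng = np.random.default_rng(0)

C   = rng.normal(scale=0.1, size=(V, m))   # word-vector lookup table
W_x = rng.normal(scale=0.1, size=(m, H))   # input -> hidden
W_r = rng.normal(scale=0.1, size=(H, H))   # previous hidden -> hidden (the recurrence)
W_o = rng.normal(scale=0.1, size=(H, V))   # hidden -> output

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def step(word_id, h_prev):
    """One time step: consume the current word, update the history, predict the next word."""
    h = np.tanh(C[word_id] @ W_x + h_prev @ W_r)
    return softmax(h @ W_o), h

h = np.zeros(H)
for w in [3, 17, 42]:          # feed a short word-index sequence
    p_next, h = step(w, h)
print(p_next.shape)            # (1000,)
```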

  12. Recurrent NNLM
     [Architecture diagram: training corpus -> lookup table of word vectors -> input layer (current word vector w_t plus recurrent hidden state context(t)) -> hidden layer -> output layer -> softmax]

  13. Recurrent NNLM
     o Requires fewer hidden units than a feedforward NNLM, though their number may have to grow with vocabulary size.
     o Stochastic gradient descent with backpropagation is used to train the model over several epochs.
     o The number of output units evaluated can be reduced to about log2(V) using hierarchical softmax layers.
     o Recurrent NNLMs achieve as much as a twofold reduction in perplexity compared to n-gram models.
     o In practice, recurrent NNLMs are much faster to train than feedforward NNLMs.

  14. Continuous Bag of Words
     o Similar to the feedforward NNLM but with no hidden layer: the model consists only of an input and an output layer.
     o Words from the past and the future of the sequence are used as input and trained to predict the current sample.
     o Owing to its simplicity, this model can be trained on a huge amount of data in a short time compared to the other neural network models.
     o In effect, the model estimates the current word given its context (the surrounding sentence).
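A minimal sketch of the CBOW forward pass (assumed sizes and names, forward computation only): the context word vectors are averaged, with no hidden layer in between, and a softmax over the vocabulary scores the current word.

```python
import numpy as np

V, m = 1000, 50
rng = np.random.default_rng(0)
W_in  = rng.normal(scale=0.1, size=(V, m))   # input word vectors (the lookup table)
W_out = rng.normal(scale=0.1, size=(m, V))   # output weights

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def cbow_forward(context_ids):
    """Average the context vectors, then score every word as the centre candidate."""
    h = W_in[context_ids].mean(axis=0)       # no hidden layer, just the average
    return softmax(h @ W_out)

# Context = 2 words before and 2 after the current position (indices are illustrative).
p_center = cbow_forward([10, 11, 13, 14])
print(p_center.argmax(), p_center.shape)     # most likely centre word id, (1000,)
```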

  15. Continuous Bag of Words
     [Architecture diagram: training corpus -> lookup table of word vectors -> input vectors w_{n−4}, ..., w_{n−1}, w_{n+1}, ..., w_{n+4} -> projection layer (average of the context vectors) -> output layer -> softmax predicting w_n]

  16. Continuous Skip-gram Model
     o This model is similar to the continuous bag-of-words model, with the roles of input and output reversed.
     o Here the model attempts to predict the words around the current word.
     o The input layer consists of the word vector of a single word, while multiple output layers are connected to the input layer.
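The sketch below (toy sentence, assumed window size) shows how the roles flip: each current word becomes the input, and every surrounding word within the window becomes a separate prediction target.

```python
def skipgram_pairs(tokens, window=2):
    """Yield (input_word, target_word) pairs: the centre word predicts its neighbours."""
    for i, center in enumerate(tokens):
        lo, hi = max(0, i - window), min(len(tokens), i + window + 1)
        for j in range(lo, hi):
            if j != i:
                yield center, tokens[j]

sentence = "the quick brown fox jumps".split()
for pair in skipgram_pairs(sentence):
    print(pair)
# ('the', 'quick'), ('the', 'brown'), ('quick', 'the'), ('quick', 'brown'), ...
```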

  17. Continuous Skip-gram Model
     [Architecture diagram: training corpus -> lookup table of word vectors -> input projection layer (single word vector w_n) -> multiple softmax output layers predicting w_{n−4}, ..., w_{n+1}, ..., w_{n+4}]

  18. Analyzing language models
     o Perplexity: a measurement of how well a language model captures the underlying probability distribution of the data.
     o Word error rate: the percentage of words misrecognized when using the language model (e.g. in speech recognition).
     o Semantic analysis: deriving semantic analogies of word pairs, filling in a sentence with the most logical word choice, etc. These tests are especially useful for measuring the quality of word vectors. For example: Berlin : Germany :: Toronto : Canada.
     o Syntactic analysis: for a language model this might be the construction of a syntactically correct parse tree; for word vectors one might test syntactic analogies such as possibly : impossibly :: ethical : unethical.
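Perplexity, for instance, is the exponentiated average negative log-likelihood the model assigns to held-out text; a short sketch with a made-up probability sequence:

```python
import math

# Per-word probabilities a language model assigned to a held-out sentence (made up).
probs = [0.2, 0.05, 0.1, 0.25, 0.08]

# Perplexity = exp( -(1/N) * sum(log p_i) ); lower is better.
perplexity = math.exp(-sum(math.log(p) for p in probs) / len(probs))
print(round(perplexity, 2))   # ~8.71
```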

  19. Perplexity Comparison: perplexity of different models tested on the Brown Corpus

  20. Perplexity Comparison: perplexity of different models on the Penn Treebank

  21. Sentence Completion Task (WSJ Kaldi rescoring)

  22. Semantic Syntactic Tests

  23. Results

  24. Results: different models with 640-dimensional word vectors; training-time comparison of different models

  25. Microsoft Research Sentence Completion Challenge

  26. Complex Learned Relationships
