SLIDE 1

Efficient Estimation of Word Representations in Vector Space

SLIDE 2

Topics

  • Language Models in NLP
  • Markov Models (n-gram model)
  • Distributed Representation of words
  • Motivation for word vector model of data
  • Feedforward Neural Network Language Model (Feedforward NNLM)
  • Recurrent Neural Network Language Model (Recurrent NNLM)
  • Continuous Bag of Words Model
  • Continuous Skip-gram Model
  • Results
  • References
SLIDE 3

n-gram model for NLP

  • Traditional NLP models predict the next word given the previous n − 1 words; this is also known as the n-gram model
  • An n-gram model gives the probability of a word w given the previous words w_1, w_2 … w_{n−1}, using an (n − 1)th-order Markov assumption
  • Mathematically, the parameter is
    P(w | w_1, w_2, …, w_{n−1}) = count(w_1, w_2, …, w_{n−1}, w) / count(w_1, w_2, …, w_{n−1})
    where w, w_1, w_2, …, w_{n−1} ∈ V and V is a vocabulary of fixed size (see the sketch after this list)
  • The above model is based on Maximum Likelihood estimation
  • The probability of occurrence of any sentence can be obtained by multiplying the n-gram probabilities of its words
  • Estimation can be smoothed using linear interpolation or discounting methods
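A minimal Python sketch (not from the slides) of the maximum-likelihood estimate above, using raw counts and no smoothing; the toy corpus and function names are made up for illustration.

```python
from collections import Counter

def train_ngram(tokens, n=3):
    """MLE n-gram counts: count(w_1..w_n) and count(w_1..w_{n-1}), no smoothing."""
    ngrams = Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))
    contexts = Counter()
    for gram, c in ngrams.items():
        contexts[gram[:-1]] += c
    return ngrams, contexts

def prob(word, context, ngrams, contexts):
    """P(word | context) = count(context + word) / count(context)."""
    denom = contexts[tuple(context)]
    return ngrams[tuple(context) + (word,)] / denom if denom else 0.0

tokens = "the cat sat on the mat the cat sat on the rug".split()
ngrams, contexts = train_ngram(tokens, n=3)
print(prob("sat", ("the", "cat"), ngrams, contexts))  # 1.0 for this toy corpus
```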

SLIDE 4

Drawbacks associated with n-gram models

  • Curse of dimensionality: a large number of parameters must be learned even with a small vocabulary
  • n-gram models operate in a discrete space, so it is difficult to generalize their parameters; generalization is easier when the model operates in a continuous space
  • Simply scaling up n-gram models does not show the expected performance improvement when the data available for a vocabulary is limited
  • n-gram models do not perform well on word similarity tasks
SLIDE 5

Distributed representation of words as vectors

  • Associate with each word in the vocabulary a distributed word feature vector in ℝ^m
  • A vocabulary V of size |V| will therefore have |V| × m free parameters, which need to be learned using some learning algorithm (see the sketch below)
  • These distributed feature vectors can be learned either in an unsupervised fashion as part of a pre-training procedure or in a supervised way

[Figure: an example word ("genesis") mapped to an m-dimensional feature vector, e.g. (0.537, 0.299, 0.098, …, 0.624)]
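A minimal Python sketch (not from the slides) of such a lookup table: the |V| × m embedding matrix is the set of free parameters a learning algorithm would tune. The vocabulary and sizes here are made up.

```python
import numpy as np

# Hypothetical toy vocabulary; real models use vocabularies of 10^5-10^6 words.
vocab = ["the", "cat", "genesis", "queen"]
V, m = len(vocab), 5                     # |V| words, m-dimensional feature vectors
word2idx = {w: i for i, w in enumerate(vocab)}

rng = np.random.default_rng(0)
E = rng.normal(scale=0.1, size=(V, m))   # |V| x m free parameters, tuned during training

def vector(word):
    """Look up the distributed feature vector for a word."""
    return E[word2idx[word]]

print(vector("genesis"))                 # an m-dimensional real-valued vector
```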

SLIDE 6

Why word vector model?

  • This model is based on continuous real-valued variables, hence the probability distributions learned by generative models are smooth functions
  • Therefore, unlike in n-gram models, a sequence of words that is absent from the data corpus is not a big issue; generalization is better with this approach
  • Multiple degrees of similarity: similarity between words goes beyond basic syntactic and semantic regularities (see the sketch after this list). For example:
    vector(King) − vector(Man) + vector(Woman) ≈ vector(Queen)
    vector(Paris) − vector(France) + vector(Italy) ≈ vector(Rome)

  • Easier to train vector models on unsupervised data
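A minimal Python sketch (not from the slides) of the analogy arithmetic mentioned above; `vectors` is assumed to be a mapping from words to trained word vectors, and the function names are illustrative.

```python
import numpy as np

def cosine(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

def analogy(vectors, a, b, c):
    """Return the word whose vector is closest to vector(a) - vector(b) + vector(c)."""
    target = vectors[a] - vectors[b] + vectors[c]
    candidates = (w for w in vectors if w not in (a, b, c))
    return max(candidates, key=lambda w: cosine(vectors[w], target))

# `vectors` would be a {word: np.ndarray} mapping of trained word vectors;
# with good vectors, analogy(vectors, "King", "Man", "Woman") -> "Queen".
```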
SLIDE 7

Learning distributed word vector representations

  • Feedforward Neural Network Language Model: the joint probability distribution of word sequences is learned along with the word feature vectors using a feedforward neural network
  • Recurrent Neural Network Language Model: an NNLM based on a recurrent neural network
  • Continuous Bag of Words: based on a log-linear classifier, but the input is the average of past and future word vectors. In short, the goal is to predict a word given its surrounding context
  • Continuous Skip-gram Model: also based on a log-linear classifier, but here the model tries to predict the past and future words surrounding a given word

SLIDE 8

Feedforward Neural Network Language Model

  • Initially proposed by Yoshua Bengio et al.
  • It is related to the n-gram language model, as it aims to learn the probability function of word sequences of length n
  • The input is the concatenated feature vector of the words w_{n−1}, w_{n−2} … w_2, w_1, and the training criterion is to predict the word w_n
  • The output of the model gives the estimated probability of a given sequence of n words
  • The neural network architecture consists of a projection layer, a hidden layer of neurons, an output layer, and a softmax function to evaluate the joint probability distribution of words

SLIDE 9

Feedforward NNLM

๐‘ฅ๐‘œโˆ’1 ๐‘ฅ๐‘œโˆ’2 ๐‘ฅ2 ๐‘ฅ1 โ‹ฎ Lookup table of word vectors

  • f size

Concatenated input vector of size of โ‹ฎ โ‹ฎ Softmax function ๐‘„(๐‘ฅ๐‘œ|๐‘ฅ๐‘œโˆ’1 โ€ฆ ๐‘ฅ2, ๐‘ฅ1) Input Projection Layer Output Layer Hidden Layer T R A I N I N G C O R P U S
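Not from the slides: a minimal NumPy sketch of the forward pass in the diagram above, with made-up sizes (V, m, n, H) and randomly initialized weights standing in for trained parameters.

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

# Illustrative sizes: vocabulary V, vector size m, context length n, hidden units H.
V, m, n, H = 10000, 64, 4, 128
rng = np.random.default_rng(0)
E = rng.normal(scale=0.1, size=(V, m))               # lookup table of word vectors
W_h = rng.normal(scale=0.1, size=((n - 1) * m, H))   # projection -> hidden weights
W_o = rng.normal(scale=0.1, size=(H, V))             # hidden -> output weights

def forward(context_ids):
    """P(w_n | w_{n-1} ... w_1) for a context of n-1 word indices."""
    x = E[context_ids].reshape(-1)                   # concatenated projection layer
    h = np.tanh(x @ W_h)                             # hidden layer
    return softmax(h @ W_o)                          # distribution over the vocabulary

p = forward([12, 7, 431])                            # n-1 = 3 previous word indices
print(p.shape, p.sum())                              # (10000,) ~1.0
```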

SLIDE 10

Feedforward NNLM

  • Fairly huge model in terms of free parameters
  • The neural network consists of (n − 1) × m × H + H × V weight parameters, where H is the number of hidden units
  • The training criterion is to predict the n-th word
  • Uses forward propagation and the backpropagation algorithm for training with mini-batch gradient descent
  • The number of output units that must be evaluated can be reduced to about log2 V using hierarchical softmax layers, which significantly reduces the training time of the model (see the example below)
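A back-of-the-envelope illustration (the concrete numbers are assumptions, not from the slides) of why the output layer dominates the parameter count and how hierarchical softmax cuts the per-prediction work to roughly log2 V:

```python
import math

# Assumed sizes: context length n, vector size m, hidden units H, vocabulary V.
n, m, H, V = 5, 100, 500, 1_000_000

weights = (n - 1) * m * H + H * V              # hidden-layer + output-layer weights
print(f"{weights:,}")                          # 500,200,000 -> the output layer dominates

full_softmax_units = V                         # units evaluated per prediction, flat softmax
hier_softmax_units = math.ceil(math.log2(V))   # ~binary decisions along a tree path
print(full_softmax_units, hier_softmax_units)  # 1000000 vs 20
```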

SLIDE 11

Recurrent Neural Network Language Model

  • Initially implemented by Tomas Mikolov, but probably inspired by Yoshua Bengio's seminal work on NNLM
  • Uses a recurrent neural network, where the input layer consists of the current word vector together with the hidden-neuron values computed for the previous word
  • The training objective is to predict the next word
  • Contrary to the feedforward NNLM, it keeps building a kind of history of the previous words seen during training. The context window of analysis is therefore variable here
SLIDE 12

Recurrent NNLM

๐‘ฅ๐‘ข Lookup table of word vectors

  • f size

โ‹ฎ โ‹ฎ Softmax function Output Layer Hidden ๐‘‘๐‘๐‘œ๐‘ข๐‘“๐‘ฆ๐‘ข(๐‘ข) T R A I N I N G C O R P U S Input Hidden
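Not from the slides: a minimal NumPy sketch of one recurrent step, where the hidden state carries the variable-length history; sizes and weights are illustrative.

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

# Illustrative sizes: vocabulary V, word-vector size m, hidden units H.
V, m, H = 10000, 64, 128
rng = np.random.default_rng(0)
E = rng.normal(scale=0.1, size=(V, m))        # lookup table of word vectors
W_in = rng.normal(scale=0.1, size=(m, H))     # word vector -> hidden
W_rec = rng.normal(scale=0.1, size=(H, H))    # previous hidden state -> hidden (recurrence)
W_out = rng.normal(scale=0.1, size=(H, V))    # hidden -> output

def step(word_id, h_prev):
    """One recurrent step: combine the current word vector with the previous hidden state."""
    h = np.tanh(E[word_id] @ W_in + h_prev @ W_rec)
    return softmax(h @ W_out), h              # distribution over the next word, new state

h = np.zeros(H)
for w in [12, 7, 431]:                        # the hidden state accumulates the history
    p, h = step(w, h)
```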

SLIDE 13

Recurrent NNLM

  • Requires fewer hidden units than the feedforward NNLM, though one may have to increase them as the vocabulary size grows
  • Stochastic gradient descent is used along with the backpropagation algorithm to train the model over several epochs
  • The number of output units that must be evaluated can be reduced to about log2 V using hierarchical softmax layers
  • Recurrent NNLM models can reduce perplexity by as much as a factor of two compared to n-gram models
  • In practice, recurrent NNLM models are much faster to train than feedforward NNLM models

SLIDE 14

Continuous Bag of Words

  • It is similar to the feedforward NNLM but with no hidden layer; the model consists only of an input and an output layer
  • Words from the past and the future of the sequence are given as input, and the model is trained to predict the current word
  • Owing to its simplicity, this model can be trained on a huge amount of data in a short time compared to the other neural network models
  • In effect, the model estimates the current word given a context or a sentence

SLIDE 15

Continuous Bag of Words

๐‘ฅ๐‘œ+4 ๐‘ฅ๐‘œ+3 ๐‘ฅ๐‘œโˆ’1 ๐‘ฅ๐‘œโˆ’2 Lookup table of word vectors of size Average of input vectors โ‹ฎ Softmax function Input Projection Layer Output Layer T R A I N I N G C O R P U S ๐‘ฅ๐‘œ+2 ๐‘ฅ๐‘œ+1 ๐‘ฅ๐‘œโˆ’3 ๐‘ฅ๐‘œโˆ’4 ๐‘ฅ๐‘œ

SLIDE 16

Continuous Skip-gram Model

  • This model is similar to the continuous bag of words model; it is just that the roles of input and output are reversed
  • Here the model attempts to predict the words around the current word
  • The input layer consists of the word vector of a single word, while multiple output layers are connected to the input layer
SLIDE 17

Continuous Skip-gram Model

[Diagram: training corpus → lookup table of word vectors of size m → single word vector w_n in the projection layer → multiple softmax output layers, one for each surrounding word w_{n−4} … w_{n+4}]
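Not from the slides: a minimal NumPy sketch of the skip-gram forward pass; a single word vector is scored against the vocabulary, and during training that prediction is compared with each word in the surrounding window. Sizes and weights are illustrative.

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

# Illustrative sizes: vocabulary V, word-vector size m.
V, m = 10000, 64
rng = np.random.default_rng(0)
E_in = rng.normal(scale=0.1, size=(V, m))    # input word vectors (lookup table)
W_out = rng.normal(scale=0.1, size=(m, V))   # projection -> output weights

def skipgram_predict(word_id):
    """From a single word vector, predict a distribution over surrounding words."""
    h = E_in[word_id]                        # projection layer: one word vector
    return softmax(h @ W_out)                # this distribution is scored against every
                                             # word in the context window during training

p = skipgram_predict(42)
print(p.argmax())                            # most likely surrounding word
```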

SLIDE 18

Analyzing language models

  • Perplexity: a measure of how well a language model captures the underlying probability distribution of the data (see the sketch after this list)
  • Word error rate: percentage of words misrecognized by the language model
  • Semantic analysis: deriving semantic analogies of word pairs, filling a sentence with the most logical word choice, etc. These kinds of tests are especially used for measuring the performance of word vectors. For example: Berlin : Germany :: Toronto : Canada
  • Syntactic analysis: for a language model, this might be the construction of a syntactically correct parse tree; for testing word vectors, one might look at predicting syntactic analogies such as possibly : impossibly :: ethical : unethical
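As a concrete illustration of the perplexity measure (standard definition, not spelled out on the slides): perplexity is the exponential of the average negative log-probability the model assigns to each word of a test text.

```python
import math

def perplexity(probs):
    """Perplexity over a test sequence, given the model's probability for each word:
    exp of the average negative log-probability per word."""
    n = len(probs)
    return math.exp(-sum(math.log(p) for p in probs) / n)

# Hypothetical per-word probabilities assigned by some model to a 4-word test sentence.
print(perplexity([0.2, 0.1, 0.25, 0.05]))   # ~7.95; lower perplexity = better model
```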

SLIDE 19

Perplexity Comparison

Perplexity of different models tested on Brown Corpus

SLIDE 20

Perplexity Comparison

Perplexity comparison of different models on Penn Treebank

SLIDE 21

Sentence Completion Task

WSJ Kaldi Rescoring

SLIDE 22

Semantic Syntactic Tests

SLIDE 23

Results

SLIDE 24

Results

Different models with 640-dimensional word vectors
Training time comparison of different models

SLIDE 25

Microsoft Research Sentence Completion Challenge

SLIDE 26

Complex Learned Relationships

SLIDE 27

References

  • [1] Tomas Mikolov, Kai Chen, Greg Corrado, and Jeffrey Dean. Efficient Estimation of Word Representations in Vector Space. In Proceedings of Workshop at ICLR, 2013.
  • [2] Y. Bengio, R. Ducharme, and P. Vincent. A Neural Probabilistic Language Model. Journal of Machine Learning Research, 3:1137–1155, 2003.
  • [3] T. Mikolov, J. Kopecký, L. Burget, O. Glembek, and J. Černocký. Neural Network Based Language Models for Highly Inflective Languages. In Proc. ICASSP 2009.