Neural LMs

Neural LMs (Bengio et al., 03): from one-hot vectors to low-dimensional representations. "Learning representations by back-propagating errors" (Rumelhart, Hinton & Williams, 1986).


  1. Brown Clustering ▪ V is the vocabulary ▪ C : V → {1, 2, …, k} is a partition of the vocabulary into k clusters ▪ q(C(w_i) | C(w_{i-1})) is the probability of the cluster of w_i following the cluster of w_{i-1} ▪ The model: p(w_1, …, w_n) = ∏_i e(w_i | C(w_i)) · q(C(w_i) | C(w_{i-1})) ▪ Objective: Quality(C)

  2. Quality(C) Slide by Michael Collins
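A sketch of the objective in the notation above, assuming the standard formulation of Brown clustering from Collins's notes (average log-likelihood under the class-based model, which decomposes into a mutual-information term plus a constant):

$$
\text{Quality}(C) \;=\; \frac{1}{n}\sum_{i=1}^{n} \log\, e\bigl(w_i \mid C(w_i)\bigr)\, q\bigl(C(w_i) \mid C(w_{i-1})\bigr)
\;=\; \sum_{c=1}^{k}\sum_{c'=1}^{k} p(c, c') \log \frac{p(c, c')}{p(c)\,p(c')} \;+\; G
$$

where p(c, c') and p(c) are the relative frequencies of cluster bigrams and unigrams in the corpus, and G is a constant that does not depend on the clustering C.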

  3. A Naive Algorithm ▪ We start with |V| clusters: each word gets its own cluster ▪ Our aim is to find k final clusters ▪ We run |V| − k merge steps: ▪ At each merge step we pick two clusters c_i and c_j and merge them into a single cluster ▪ We greedily pick merges such that Quality(C) for the clustering C after the merge step is maximized at each stage ▪ Cost? Naive = O(|V|^5). Improved algorithm gives O(|V|^3): still too slow for realistic values of |V| Slide by Michael Collins

  4. Brown Clustering Algorithm ▪ Parameter of the approach is m (e.g., m = 1000) ▪ Take the top m most frequent words, put each into its own cluster, c_1, c_2, …, c_m ▪ For i = (m + 1) … |V|: ▪ Create a new cluster, c_{m+1}, for the i-th most frequent word. We now have m + 1 clusters ▪ Choose two clusters from c_1 … c_{m+1} to be merged: pick the merge that gives a maximum value for Quality(C). We're now back to m clusters ▪ Carry out (m − 1) final merges, to create a full hierarchy ▪ Running time: O(|V| m^2 + n), where n is the corpus length Slide by Michael Collins

  5. Word embedding representations ▪ Count-based ▪ tf-idf, PPMI ▪ Class-based ▪ Brown clusters ▪ Distributed prediction-based (type) embeddings ▪ Word2Vec, Fasttext ▪ Distributed contextual (token) embeddings from language models ▪ ELMo, BERT ▪ + many more variants ▪ Multilingual embeddings ▪ Multisense embeddings ▪ Syntactic embeddings ▪ etc. etc.

  6. Word2Vec ▪ Popular embedding method ▪ Very fast to train ▪ Code available on the web ▪ Idea: predict rather than count

  7. Word2Vec [Mikolov et al., 2013]

  8. Skip-gram Prediction ▪ Predict vs. Count: the cat sat on the mat

  9. Skip-gram Prediction ▪ Predict vs. Count: the cat sat on the mat ▪ context size = 2 ▪ CLASSIFIER: w_t = the → w_{t-2} = <start-2>, w_{t-1} = <start-1>, w_{t+1} = cat, w_{t+2} = sat

  10. Skip-gram Prediction ▪ Predict vs. Count: the cat sat on the mat ▪ context size = 2 ▪ CLASSIFIER: w_t = cat → w_{t-2} = <start-1>, w_{t-1} = the, w_{t+1} = sat, w_{t+2} = on

  11. Skip-gram Prediction ▪ Predict vs. Count: the cat sat on the mat ▪ context size = 2 ▪ CLASSIFIER: w_t = sat → w_{t-2} = the, w_{t-1} = cat, w_{t+1} = on, w_{t+2} = the

  12. Skip-gram Prediction ▪ Predict vs. Count: the cat sat on the mat ▪ context size = 2 ▪ CLASSIFIER: w_t = on → w_{t-2} = cat, w_{t-1} = sat, w_{t+1} = the, w_{t+2} = mat

  13. Skip-gram Prediction ▪ Predict vs. Count: the cat sat on the mat ▪ context size = 2 ▪ CLASSIFIER: w_t = the → w_{t-2} = sat, w_{t-1} = on, w_{t+1} = mat, w_{t+2} = <end+1>

  14. Skip-gram Prediction ▪ Predict vs. Count: the cat sat on the mat ▪ context size = 2 ▪ CLASSIFIER: w_t = mat → w_{t-2} = on, w_{t-1} = the, w_{t+1} = <end+1>, w_{t+2} = <end+2>

  15. Skip-gram Prediction ▪ Predict vs. Count ▪ CLASSIFIER: w_t = the → w_{t-2} = sat, w_{t-1} = on, w_{t+1} = mat, w_{t+2} = <end+1> ▪ CLASSIFIER: w_t = the → w_{t-2} = <start-2>, w_{t-1} = <start-1>, w_{t+1} = cat, w_{t+2} = sat

  16. Skip-gram Prediction

  17. Skip-gram Prediction ▪ Training data: (w_t, w_{t-2}), (w_t, w_{t-1}), (w_t, w_{t+1}), (w_t, w_{t+2}), …
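As a concrete illustration of how these (center, context) training pairs are generated from the example sentence, here is a minimal sketch; the boundary tokens <start-k>/<end+k> follow the slides above, and the function and variable names are my own:

```python
def skipgram_pairs(tokens, window=2):
    """Generate (center, context) training pairs with boundary padding, window size 2."""
    padded = ([f"<start-{i}>" for i in range(window, 0, -1)]
              + list(tokens)
              + [f"<end+{i}>" for i in range(1, window + 1)])
    pairs = []
    for t in range(window, window + len(tokens)):      # positions of the real tokens
        for offset in range(-window, window + 1):
            if offset != 0:                             # skip the center word itself
                pairs.append((padded[t], padded[t + offset]))
    return pairs

print(skipgram_pairs("the cat sat on the mat".split()))
# [('the', '<start-2>'), ('the', '<start-1>'), ('the', 'cat'), ('the', 'sat'),
#  ('cat', '<start-1>'), ('cat', 'the'), ('cat', 'sat'), ('cat', 'on'), ...]
```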

  18. Skip-gram Prediction

  19. Objective ▪ For each word in the corpus t = 1 … T: maximize the probability of each context word within the window, given the current center word (see the formulation below)
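Written out, a standard form of this objective (as in Mikolov et al., 2013, with θ denoting all embedding parameters and c the context size) is to maximize the average log-probability:

$$
J(\theta) \;=\; \frac{1}{T}\sum_{t=1}^{T} \;\sum_{\substack{-c \,\le\, j \,\le\, c \\ j \neq 0}} \log p\bigl(w_{t+j} \mid w_t\bigr)
$$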

  20. Skip-gram Prediction ▪ Softmax
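The softmax referred to here defines the context probability from two vectors per word, a center vector v and a context ("output") vector u; in the standard skip-gram parameterization:

$$
p(o \mid c) \;=\; \frac{\exp\bigl(u_o^{\top} v_c\bigr)}{\sum_{w \in V} \exp\bigl(u_w^{\top} v_c\bigr)}
$$

Summing over the whole vocabulary V makes this expensive to compute, which motivates the negative-sampling approximation on the next slides.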

  21. SGNS ▪ Negative Sampling ▪ Treat the target word and a neighboring context word as positive examples ▪ Subsample very frequent words ▪ Randomly sample other words in the lexicon to get negative samples ▪ 2× negative samples (two negatives for each positive pair) ▪ Given a tuple (t, c) = (target, context): (cat, sat) is a positive example, (cat, aardvark) a negative one

  22. Choosing noise words ▪ Could pick w according to its unigram frequency P(w) ▪ More common to choose them according to P_α(w) = count(w)^α / Σ_{w'} count(w')^α ▪ α = ¾ works well because it gives rare noise words slightly higher probability ▪ To show this, imagine two events with p(a) = .99 and p(b) = .01 (see the sketch below):
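A small sketch of the α = ¾ reweighting on the slide's two-event example; the resulting numbers are computed here, not taken from the slide:

```python
def unigram_alpha(probs, alpha=0.75):
    """Reweight unigram probabilities: P_alpha(w) is proportional to P(w)**alpha."""
    weighted = {w: p ** alpha for w, p in probs.items()}
    total = sum(weighted.values())
    return {w: v / total for w, v in weighted.items()}

print(unigram_alpha({"a": 0.99, "b": 0.01}))
# {'a': 0.969..., 'b': 0.031...}  -> the rare event b gets roughly 3x its original probability
```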

  23. How to compute p(+|t,c)?
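The usual answer, e.g. in Jurafsky & Martin's presentation of SGNS, which the (cat, sat)/(cat, aardvark) example follows, is to pass the dot product of the target and context embeddings through a sigmoid:

$$
p(+ \mid t, c) \;=\; \sigma(t \cdot c) \;=\; \frac{1}{1 + e^{-\,t \cdot c}},
\qquad
p(- \mid t, c) \;=\; 1 - \sigma(t \cdot c)
$$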

  24. SGNS ▪ Given a tuple (t, c) = (target, context) ▪ (cat, sat) ▪ (cat, aardvark) ▪ Return the probability that c is a real context word, p(+ | t, c) as above

  25. Learning the classifier ▪ Iterative process ▪ We’ll start with 0 or random weights ▪ Then adjust the word weights to ▪ make the positive pairs more likely ▪ and the negative pairs less likely ▪ over the entire training set: ▪ Train using gradient descent
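A minimal sketch of that training loop as one stochastic-gradient step (numpy only; the gradient expressions are the standard SGNS ones, and all names here are illustrative rather than from the slides):

```python
import numpy as np

def sgns_step(T, C, t, pos_c, neg_cs, lr=0.025):
    """One SGD step for target index t, positive context pos_c, sampled negative contexts neg_cs.
    T: target-embedding matrix, C: context-embedding matrix (both |V| x d)."""
    sigmoid = lambda x: 1.0 / (1.0 + np.exp(-x))
    grad_t = np.zeros_like(T[t])
    # positive pair: push sigma(t . c) toward 1
    err = sigmoid(T[t] @ C[pos_c]) - 1.0
    grad_t += err * C[pos_c]
    C[pos_c] -= lr * err * T[t]
    # negative pairs: push sigma(t . c_neg) toward 0
    for c in neg_cs:
        err = sigmoid(T[t] @ C[c])
        grad_t += err * C[c]
        C[c] -= lr * err * T[t]
    T[t] -= lr * grad_t

# toy usage: vocabulary of 6 words, 8-dimensional embeddings
rng = np.random.default_rng(0)
T = rng.normal(scale=0.1, size=(6, 8))
C = np.zeros((6, 8))
sgns_step(T, C, t=1, pos_c=2, neg_cs=[4, 5])
```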

  26. Skip-gram Prediction

  27. FastText https://fasttext.cc/

  28. FastText: Motivation

  29. Subword Representation skiing = {^skiing$, ^ski, skii, kiin, iing, ing$}
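A small sketch of how those boundary-marked character n-grams can be extracted; n = 4 is used here to reproduce the skiing example, while fastText itself uses n = 3…6 plus the whole word:

```python
def char_ngrams(word, n=4):
    """Boundary-marked character n-grams plus the whole word, as in the skiing example."""
    marked = f"^{word}$"
    grams = {marked[i:i + n] for i in range(len(marked) - n + 1)}
    grams.add(marked)                      # the full word itself is also a feature
    return grams

print(char_ngrams("skiing"))
# {'^skiing$', '^ski', 'skii', 'kiin', 'iing', 'ing$'}   (set order may vary)
```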

  30. FastText

  31. Details ▪ How many possible n-grams? |character set|^n ▪ Hashing to map n-grams to integers in 1 to K = 2M ▪ Get word vectors for out-of-vocabulary words using subwords ▪ Less than 2× slower than word2vec skip-gram ▪ n-grams between 3 and 6 characters ▪ Short n-grams (n = 4) are good for capturing syntactic information ▪ Longer n-grams (n = 6) are good for capturing semantic information
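Since |character set|^n distinct n-grams are far too many to store, each n-gram is mapped to one of K buckets with a hash function. A simplified sketch of that hashing trick (fastText's own implementation uses an FNV-1a-style 32-bit hash; the exact byte handling here is simplified, and K = 2M follows the slide):

```python
def fnv1a(s: str) -> int:
    """32-bit FNV-1a hash of a string."""
    h = 0x811C9DC5
    for byte in s.encode("utf-8"):
        h ^= byte
        h = (h * 0x01000193) & 0xFFFFFFFF
    return h

K = 2_000_000                      # number of n-gram buckets

def ngram_bucket(ngram: str) -> int:
    """Map an n-gram to a bucket id; its embedding lives in row (vocab_size + bucket)."""
    return fnv1a(ngram) % K

print({g: ngram_bucket(g) for g in ["^ski", "skii", "kiin", "iing", "ing$"]})
```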

  32. FastText Evaluation ▪ Intrinsic evaluation: word similarity, measured with Spearman's rho between human ranks and model ranks ▪ Languages: Arabic, German, Spanish, French, Romanian, Russian

      word1     word2        similarity (humans)   similarity (embeddings)
      vanish    disappear    9.8                   1.1
      behave    obey         7.3                   0.5
      belief    impression   5.95                  0.3
      muscle    bone         3.65                  1.7
      modest    flexible     0.98                  0.98
      hole      agreement    0.3                   0.3
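A sketch of how the Spearman correlation for this intrinsic evaluation is computed, using the six word pairs from the table above (the similarity numbers are the slide's; the code and scipy usage are illustrative):

```python
from scipy.stats import spearmanr

human =     [9.8, 7.3, 5.95, 3.65, 0.98, 0.3]   # human similarity judgments
embedding = [1.1, 0.5, 0.3,  1.7,  0.98, 0.3]   # similarities from the embeddings

rho, p_value = spearmanr(human, embedding)       # correlation of the two rankings
print(f"Spearman's rho = {rho:.2f}")
```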

  33. FastText Evaluation [Grave et al, 2017]

  34. FastText Evaluation

  35. FastText Evaluation

  36. Dense Embeddings You Can Download ▪ Word2vec (Mikolov et al., 2013) https://code.google.com/archive/p/word2vec/ ▪ fastText (Bojanowski et al., 2017) http://www.fasttext.cc/ ▪ GloVe (Pennington et al., 2014) http://nlp.stanford.edu/projects/glove/

  37. Word embedding representations ▪ Count-based ▪ tf-idf, PPMI ▪ Class-based ▪ Brown clusters ▪ Distributed prediction-based (type) embeddings ▪ Word2Vec, Fasttext ▪ Distributed contextual (token) embeddings from language models ▪ ELMo, BERT ▪ + many more variants ▪ Multilingual embeddings ▪ Multisense embeddings ▪ Syntactic embeddings ▪ etc. etc.

  38. Motivation p(play | Elmo and Cookie Monster play a game .) ≠ p(play | The Broadway play premiered yesterday .)

  39. ELMo https://allennlp.org/elmo

  40. Background

  41. [Diagram: an LSTM language model reads "The Broadway play premiered yesterday ." and predicts the next word (??)]

  42. [Diagram: the same language model with stacked LSTM layers over "The Broadway play premiered yesterday ."]

  43. [Diagram: LSTM language models running in both directions over "The Broadway play premiered yesterday ." (two predictions, ?? ??)]

  44. Embeddings from Language Models (ELMo) [Diagram: the ELMo representation (??) of a token is built from the biLSTM language model's states for "The Broadway play premiered yesterday ."]

  45. Embeddings from Language Models (ELMo) [Diagram: ELMo = the biLSTM layer representations for the token]

  46. Embeddings from Language Models (ELMo) [Diagram: ELMo = a sum of the per-layer representations]

  47. Embeddings from Language Models (ELMo) [Diagram: ELMo = λ0 · (token layer) + λ1 · (LSTM layer 1) + λ2 · (LSTM layer 2)]
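In symbols, the combination on the last slide is a weighted sum over the layers of the biLM (the context-independent token representation plus the LSTM layers); in Peters et al. (2018) the weights s_j are softmax-normalized task-specific scalars with an additional scale γ:

$$
\mathrm{ELMo}_k \;=\; \lambda_0\,\mathbf{x}_k + \lambda_1\,\mathbf{h}_{k,1} + \lambda_2\,\mathbf{h}_{k,2},
\qquad
\mathrm{ELMo}_k^{\,task} \;=\; \gamma^{\,task} \sum_{j=0}^{L} s_j^{\,task}\,\mathbf{h}_{k,j}
$$

where x_k = h_{k,0} is the token representation and h_{k,j} is the state of biLSTM layer j at position k.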

  48. Evaluation: Extrinsic Tasks

  49. Stanford Question Answering Dataset (SQuAD) [Rajpurkar et al, ‘16, ‘18]

  50. SNLI [Bowman et al, ‘15]

  51. BERT https://ai.googleblog.com/2018/11/open-sourcing-bert-state-of-art-pre.html

  52. Cloze task objective
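The cloze (masked language model) objective replaces a fraction of the input tokens with a [MASK] symbol and trains the model to recover them. A minimal sketch of the data side of that objective; the 15% masking rate follows BERT, but the names and the simple all-or-nothing masking here are illustrative only:

```python
import random

def mask_tokens(tokens, mask_rate=0.15, mask_token="[MASK]"):
    """Return (masked input, labels); labels are None where no prediction is needed."""
    masked, labels = [], []
    for tok in tokens:
        if random.random() < mask_rate:
            masked.append(mask_token)    # the model must predict the original token here
            labels.append(tok)
        else:
            masked.append(tok)
            labels.append(None)
    return masked, labels

print(mask_tokens("the broadway play premiered yesterday .".split()))
```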

  53. https://rajpurkar.github.io/SQuAD-explorer/

  54. Multilingual Embeddings https://github.com/mfaruqui/crosslingual-cca http://128.2.220.95/multilingual/

  55. Motivation ▪ comparison of words trained with different models [Diagram: model 1 vs. model 2 → ?]
