

  1. Algorithms for NLP Word Embeddings Yulia Tsvetkov – CMU Slides: Dan Jurafsky – Stanford, Mike Peters – AI2, Edouard Grave – FAIR

  2. Brown Clustering [Figure: a binary merge tree over {dog, cat, ant, river, lake, blue, red}; each word's path gives its bit string] dog [0000], cat [0001], ant [001], river [010], lake [011], blue [10], red [11]

  3. Brown Clustering [Brown et al., 1992]

  4. Brown Clustering [Miller et al., 2004]

  5. Brown Clustering ▪ V is a vocabulary ▪ C : V → {1, 2, …, k} is a partition of the vocabulary into k clusters ▪ q(c′ | c) is the probability of the cluster of w_i following the cluster of w_{i−1} ▪ The model is scored by Quality(C)
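In the notation of Michael Collins' lecture notes (which these slides credit; the emission symbol e and transition symbol q are taken from those notes as an assumption, not from the slide text), the model assigns a corpus the likelihood

```latex
p(w_1, \dots, w_n) = \prod_{i=1}^{n} e\big(w_i \mid C(w_i)\big)\, q\big(C(w_i) \mid C(w_{i-1})\big)
```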

  6. Quality(C) Slide by Michael Collins
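In Collins' formulation, Quality(C) is the average log-likelihood of the corpus under the model above, which reduces to the mutual information between adjacent clusters plus a constant:

```latex
\mathrm{Quality}(C)
  = \frac{1}{n}\sum_{i=1}^{n} \log e\big(w_i \mid C(w_i)\big)\, q\big(C(w_i) \mid C(w_{i-1})\big)
  = \sum_{c,\,c'} p(c, c') \log \frac{p(c, c')}{p(c)\, p(c')} + G
```

Here p(c, c′) and p(c) are cluster bigram and unigram probabilities estimated from the corpus under the clustering, and G is a constant that does not depend on C.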

  7. A Naive Algorithm ▪ We start with |V| clusters: each word gets its own cluster ▪ Our aim is to find k final clusters ▪ We run |V| − k merge steps: ▪ At each merge step we pick two clusters c_i and c_j and merge them into a single cluster ▪ We greedily pick merges such that Quality(C) for the clustering C after the merge step is maximized at each stage ▪ Cost? Naive: O(|V|⁵). An improved algorithm gives O(|V|³): still too slow for realistic values of |V| Slide by Michael Collins
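A minimal sketch of this naive greedy procedure (my own illustration, not the course's code; the function names are hypothetical). It scores a clustering by the mutual-information part of Quality(C), which differs from the full objective only by a constant:

```python
import math
from collections import Counter
from itertools import combinations

def quality(clusters, bigrams, unigrams):
    """Mutual information between adjacent clusters (Quality(C) up to a constant)."""
    cluster_of = {w: i for i, cl in enumerate(clusters) for w in cl}
    cluster_bigrams, cluster_unigrams = Counter(), Counter()
    for (w1, w2), n in bigrams.items():
        cluster_bigrams[cluster_of[w1], cluster_of[w2]] += n
    for w, n in unigrams.items():
        cluster_unigrams[cluster_of[w]] += n
    total_bi, total_uni = sum(cluster_bigrams.values()), sum(cluster_unigrams.values())
    return sum(
        (n / total_bi) * math.log(
            (n / total_bi)
            / ((cluster_unigrams[c1] / total_uni) * (cluster_unigrams[c2] / total_uni))
        )
        for (c1, c2), n in cluster_bigrams.items()
    )

def naive_brown(vocab, bigrams, unigrams, k):
    """Greedy agglomeration: run |V| - k merges, each chosen to maximize Quality(C)."""
    clusters = [frozenset([w]) for w in vocab]
    while len(clusters) > k:
        def merged(i, j):
            rest = [c for h, c in enumerate(clusters) if h not in (i, j)]
            return rest + [clusters[i] | clusters[j]]
        # try every pair of clusters; keep the merge with the best objective
        clusters = max(
            (merged(i, j) for i, j in combinations(range(len(clusters)), 2)),
            key=lambda cs: quality(cs, bigrams, unigrams),
        )
    return clusters

# toy corpus
tokens = "the cat sat on the mat the dog sat on the rug".split()
unigrams = Counter(tokens)
bigrams = Counter(zip(tokens, tokens[1:]))
print(naive_brown(sorted(set(tokens)), bigrams, unigrams, k=3))
```

Recomputing the objective from scratch for every candidate merge is exactly what makes this naive version O(|V|⁵); the practical algorithm on the next slide avoids this by restricting merges to the m + 1 currently active clusters.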

  8. Brown Clustering Algorithm ▪ Parameter of the approach is m (e.g., m = 1000) ▪ Take the top m most frequent words, put each into its own cluster, c_1, c_2, …, c_m ▪ For i = (m + 1) … |V|: ▪ Create a new cluster, c_{m+1}, for the i'th most frequent word. We now have m + 1 clusters ▪ Choose two clusters from c_1 … c_{m+1} to be merged: pick the merge that gives a maximum value for Quality(C). We're now back to m clusters ▪ Carry out (m − 1) final merges, to create a full hierarchy ▪ Running time: O(|V|m² + n), where n is the corpus length Slide by Michael Collins

  9. Plan for Today ▪ Word2Vec ▪ Representation is created by training a classifier to distinguish nearby and far-away words ▪ FastText ▪ Extension of word2vec to include subword information ▪ ELMo ▪ Contextual token embeddings ▪ Multilingual embeddings ▪ Using embeddings to study history and culture

  10. Word2Vec ▪ Popular embedding method ▪ Very fast to train ▪ Code available on the web ▪ Idea: predict rather than count

  11. Word2Vec [Mikolov et al., '13]

  12. Skip-gram Prediction ▪ Predict vs Count: "the cat sat on the mat"

  13. Skip-gram Prediction ▪ Predict vs Count: "the cat sat on the mat" ▪ w_{t−2} = <start_{−2}>, w_{t−1} = <start_{−1}>, w_t = the, w_{t+1} = cat, w_{t+2} = sat → CLASSIFIER (context size = 2)

  14. Skip-gram Prediction ▪ Predict vs Count: "the cat sat on the mat" ▪ w_{t−2} = <start_{−1}>, w_{t−1} = the, w_t = cat, w_{t+1} = sat, w_{t+2} = on → CLASSIFIER (context size = 2)

  15. Skip-gram Prediction ▪ Predict vs Count: "the cat sat on the mat" ▪ w_{t−2} = the, w_{t−1} = cat, w_t = sat, w_{t+1} = on, w_{t+2} = the → CLASSIFIER (context size = 2)

  16. Skip-gram Prediction ▪ Predict vs Count: "the cat sat on the mat" ▪ w_{t−2} = cat, w_{t−1} = sat, w_t = on, w_{t+1} = the, w_{t+2} = mat → CLASSIFIER (context size = 2)

  17. Skip-gram Prediction ▪ Predict vs Count: "the cat sat on the mat" ▪ w_{t−2} = sat, w_{t−1} = on, w_t = the, w_{t+1} = mat, w_{t+2} = <end_{+1}> → CLASSIFIER (context size = 2)

  18. Skip-gram Prediction ▪ Predict vs Count: "the cat sat on the mat" ▪ w_{t−2} = on, w_{t−1} = the, w_t = mat, w_{t+1} = <end_{+1}>, w_{t+2} = <end_{+2}> → CLASSIFIER (context size = 2)

  19. Skip-gram Prediction ▪ Predict vs Count ▪ w_{t−2} = sat, w_{t−1} = on, w_t = the, w_{t+1} = mat, w_{t+2} = <end_{+1}> → CLASSIFIER ▪ w_{t−2} = <start_{−2}>, w_{t−1} = <start_{−1}>, w_t = the, w_{t+1} = cat, w_{t+2} = sat → CLASSIFIER

  20. Skip-gram Prediction

  21. Skip-gram Prediction ▪ Training data: (w_t, w_{t−2}), (w_t, w_{t−1}), (w_t, w_{t+1}), (w_t, w_{t+2}), …
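A small sketch of how these training pairs can be generated (illustrative only; the start/end padding tokens from the previous slides are omitted and pairs are taken only within the sentence):

```python
def skipgram_pairs(tokens, window=2):
    """Yield (target, context) pairs for every position and every in-window offset."""
    for t, target in enumerate(tokens):
        for offset in range(-window, window + 1):
            j = t + offset
            if offset != 0 and 0 <= j < len(tokens):
                yield target, tokens[j]

print(list(skipgram_pairs("the cat sat on the mat".split())))
```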

  22. Skip-gram Prediction

  23. ▪ For each word position in the corpus t = 1 … T: maximize the probability of each context word within the window, given the current center word
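Written out, this is the standard skip-gram objective (Mikolov et al.) for a corpus of length T, window size m, and parameters θ:

```latex
J(\theta) = \frac{1}{T} \sum_{t=1}^{T} \; \sum_{\substack{-m \le j \le m \\ j \ne 0}} \log p\big(w_{t+j} \mid w_t; \theta\big)
```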

  24. Skip-gram Prediction ▪ Softmax
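In standard word2vec notation, with center-word vector v_c and context (output) vectors u_o, the softmax defines each context probability as

```latex
p(o \mid c) = \frac{\exp\!\big(u_o^{\top} v_c\big)}{\sum_{w \in V} \exp\!\big(u_w^{\top} v_c\big)}
```

The denominator sums over the whole vocabulary, which is what makes the full softmax expensive and motivates negative sampling on the next slide.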

  25. SGNS ▪ Negative Sampling ▪ Treat the target word and a neighboring context word as positive examples ▪ Subsample very frequent words ▪ Randomly sample other words in the lexicon to get negative samples ▪ 2 negative samples per positive example ▪ Given a tuple (t, c) = target, context: ▪ (cat, sat) — positive ▪ (cat, aardvark) — negative

  26. Learning the classifier ▪ Iterative process ▪ We’ll start with 0 or random weights ▪ Then adjust the word weights to ▪ make the positive pairs more likely ▪ and the negative pairs less likely ▪ over the entire training set: ▪ Train using gradient descent
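A compact NumPy sketch of this training loop (my own illustration under simplifying assumptions: negatives are drawn uniformly rather than from the weighted distribution on slide 29, there is no subsampling of frequent words, and the learning rate is fixed):

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def train_sgns(pairs, vocab, dim=50, k=2, lr=0.025, epochs=5):
    """Skip-gram with negative sampling, trained by stochastic gradient descent."""
    idx = {w: i for i, w in enumerate(vocab)}
    W = rng.normal(scale=0.1, size=(len(vocab), dim))  # target-word vectors
    C = rng.normal(scale=0.1, size=(len(vocab), dim))  # context-word vectors
    for _ in range(epochs):
        for target, context in pairs:
            t = idx[target]
            # one positive example plus k uniformly sampled negatives
            examples = [(idx[context], 1.0)] + [(int(n), 0.0)
                                                for n in rng.integers(len(vocab), size=k)]
            for c, label in examples:
                # gradient of the logistic loss w.r.t. the score W[t] . C[c]
                g = lr * (sigmoid(W[t] @ C[c]) - label)
                w_t = W[t].copy()        # cache before updating
                W[t] -= g * C[c]         # make positive pairs more likely,
                C[c] -= g * w_t          # negative pairs less likely
    return W

sentence = "the cat sat on the mat".split()
vocab = sorted(set(sentence))
pairs = [(sentence[i], sentence[j]) for i in range(len(sentence))
         for j in range(max(0, i - 2), min(len(sentence), i + 3)) if i != j]
print(train_sgns(pairs, vocab).shape)
```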

  27. How to compute p(+|t,c)?

  28. SGNS Given a tuple (t,c) = target, context ▪ (cat, sat) ▪ (cat, aardvark) Return probability that c is a real context word:
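In the formulation of Jurafsky & Martin's chapter (from which these slides draw), the classifier scores the pair by the dot product of the target and context vectors and squashes it with a sigmoid:

```latex
P(+ \mid t, c) = \sigma(t \cdot c) = \frac{1}{1 + e^{-\,t \cdot c}}, \qquad
P(- \mid t, c) = 1 - P(+ \mid t, c)
```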

  29. Choosing noise words ▪ Could pick w according to its unigram frequency P(w) ▪ More common: choose noise words according to P_α(w) ▪ α = ¾ works well because it gives rare noise words slightly higher probability ▪ To see this, imagine two events with P(a) = .99 and P(b) = .01:
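The weighting and the worked example, as given in Jurafsky & Martin's chapter (the .97/.03 values follow directly from the definition):

```latex
P_{\alpha}(w) = \frac{\mathrm{count}(w)^{\alpha}}{\sum_{w'} \mathrm{count}(w')^{\alpha}}, \qquad
P_{\alpha}(a) = \frac{.99^{.75}}{.99^{.75} + .01^{.75}} \approx .97, \quad
P_{\alpha}(b) = \frac{.01^{.75}}{.99^{.75} + .01^{.75}} \approx .03
```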

  30. Skip-gram Prediction

  31. FastText https://fasttext.cc/

  32. FastText: Motivation

  33. Subword Representation skiing = {^skiing$, ^ski, skii, kiin, iing, ing$}

  34. FastText

  35. Details ▪ n-grams between 3 and 6 characters ▪ How many possible n-grams? |character set|^n ▪ Hashing to map n-grams to integers in 1 to K = 2M ▪ Get word vectors for out-of-vocabulary words using subwords ▪ Less than 2× slower than word2vec skip-gram ▪ Short n-grams (n = 4) are good for capturing syntactic information ▪ Longer n-grams (n = 6) are good for capturing semantic information
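A sketch of the subword extraction and hashing step (illustrative only: FastText itself uses the FNV-1a hash and angle-bracket boundary markers; Python's built-in hash and the ^/$ markers from slide 33 are used here just to show the idea):

```python
def subword_ngrams(word, n_min=3, n_max=6):
    """All character n-grams of the boundary-marked word, plus the full word itself."""
    marked = "^" + word + "$"          # boundary markers as on slide 33
    grams = {marked}
    for n in range(n_min, n_max + 1):
        for i in range(len(marked) - n + 1):
            grams.add(marked[i:i + n])
    return grams

def bucket(ngram, num_buckets=2_000_000):
    """Map an n-gram to one of K = 2M integer buckets (stand-in for FastText's FNV-1a)."""
    return hash(ngram) % num_buckets

print(sorted(subword_ngrams("skiing"), key=len))
```

A word's vector is then the sum of the vectors stored in the buckets of its n-grams, which is what makes vectors for out-of-vocabulary words possible.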

  36. FastText Evaluation ▪ Intrinsic evaluation: word similarity, scored by Spearman's rho between human ranks and model ranks ▪ Languages: Arabic, German, Spanish, French, Romanian, Russian

      word1    word2        similarity (humans)   similarity (embeddings)
      vanish   disappear    9.8                   1.1
      behave   obey         7.3                   0.5
      belief   impression   5.95                  0.3
      muscle   bone         3.65                  1.7
      modest   flexible     0.98                  0.98
      hole     agreement    0.3                   0.3
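As an illustration of the metric on the six pairs above (this assumes scipy is available and is not the course's evaluation script):

```python
from scipy.stats import spearmanr

human = [9.8, 7.3, 5.95, 3.65, 0.98, 0.3]   # human similarity judgments (slide 36)
model = [1.1, 0.5, 0.3, 1.7, 0.98, 0.3]     # embedding similarities (slide 36)
rho, pvalue = spearmanr(human, model)
print(f"Spearman's rho = {rho:.2f}")
```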

  37. FastText Evaluation [Grave et al., 2017]

  38. FastText Evaluation

  39. FastText Evaluation

  40. ELMo https://allennlp.org/elmo

  41. Motivation p(play | Elmo and Cookie Monster play a game .) ≠ p(play | The Broadway play premiered yesterday .)

  42. Background

  43. [Figure: one layer of LSTMs run over "The Broadway play premiered yesterday .", with one output marked "??"]

  44. [Figure: two stacked layers of LSTMs over the same sentence]

  45. [Figure: two stacked layers of LSTMs over the same sentence, with two outputs marked "??"]

  46. Embeddings from Language Models (ELMo) [Figure: the "??" representation is read off the stacked LSTM language model]

  47. Embeddings from Language Models (ELMo) [Figure: the token's representation at each LSTM layer]

  48. Embeddings from Language Models (ELMo) [Figure: ELMo = sum of the per-layer representations]

  49. Embeddings from Language Models (ELMo) [Figure: ELMo = λ₀ · ( · ) + λ₁ · ( · ) + λ₂ · ( · ), a weighted sum of the per-layer representations]
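For reference, the weighted combination sketched on slide 49 corresponds to the task-specific mixture in Peters et al. (2018); this is the paper's notation rather than the slide's, with the λ's above playing the role of the softmax-normalized weights s_j and h_{k,j} denoting layer j's representation of token k:

```latex
\mathrm{ELMo}_k^{\,task} = \gamma^{\,task} \sum_{j=0}^{L} s_j^{\,task}\, \mathbf{h}_{k,j}^{LM}
```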

  50. Evaluation: Extrinsic Tasks

  51. Stanford Question Answering Dataset (SQuAD) [Rajpurkar et al., '16, '18]

  52. SNLI [Bowman et al., '15]

  53. Multilingual Embeddings https://github.com/mfaruqui/crosslingual-cca http://128.2.220.95/multilingual/

  54. Motivation [Figure: model 1 vs. model 2 — how do their embedding spaces relate?]

  55. Motivation [Figure: English vs. French embedding spaces — how do they relate?]

  56. Canonical Correlation Analysis (CCA) ▪ Canonical Correlation Analysis (Hotelling, 1936) projects two sets of vectors (of equal cardinality) into a space where they are maximally correlated. [Figure: CCA applied to vector sets Ω and Σ]

  57. Canonical Correlation Analysis (CCA) ▪ Ω ⊆ X, Σ ⊆ Y ▪ W, V = CCA(Ω, Σ) ▪ X (n₁ × d₁) is projected by W (d₁ × k) to X′ (n₁ × k); Y (n₂ × d₂) is projected by V (d₂ × k) to Y′ (n₂ × k) ▪ k = min(r(Ω), r(Σ)) ▪ X′ and Y′ are now maximally correlated. [Faruqui & Dyer, '14]
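A toy sketch of this projection using scikit-learn (my own illustration: the matrices are random placeholders standing in for English and French embeddings of dictionary-aligned word pairs, and Faruqui & Dyer's released tooling uses its own CCA implementation rather than scikit-learn):

```python
import numpy as np
from sklearn.cross_decomposition import CCA

rng = np.random.default_rng(0)
# Placeholder data: each row pair corresponds to one bilingual dictionary entry.
X = rng.normal(size=(500, 50))    # n x d1 "English" vectors
Y = rng.normal(size=(500, 40))    # n x d2 "French" vectors

k = 20                            # shared dimensionality, k <= min(d1, d2)
cca = CCA(n_components=k, max_iter=1000)
cca.fit(X, Y)
X_proj, Y_proj = cca.transform(X, Y)   # n x k views that are maximally correlated
print(X_proj.shape, Y_proj.shape)
```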

  58. Extension: Multilingual Embeddings ▪ French, Spanish, Arabic, and Swedish embeddings are all projected into the English space via CCA projection matrices, e.g., O_{french→english} and its inverse O_{french←english} = O_{french→english}^{−1} [Ammar et al., '16]

  59. Embeddings can help study word history!

  60. Diachronic Embeddings ▪ Train word vectors for each time period (e.g., the "dog" 1920 word vector vs. the "dog" 1990 word vector; 1900, 1950, 2000, …) ▪ Count-based embeddings w/ PPMI ▪ Projected to a common space

  61. Project 300 dimensions down into 2 ▪ ~30 million books, 1850–1990, Google Books data

  62. Negative words change faster than positive words

  63. Embeddings reflect ethnic stereotypes over time

  64. Change in linguistic framing, 1910–1990 ▪ Change in association of Chinese names with adjectives framed as "othering" (barbaric, monstrous, bizarre)
