
SI425 : NLP Set 8: Words as Vectors (distributional similarity)

  1. SI425 : NLP Set 8: Words as Vectors (distributional similarity). Fall 2020: Chambers. Some slides adapted from Dan Jurafsky and Bill MacCartney.

  2. Why are these so different? P(ball | threw, the) = 0.12, P(baseball | threw, the) = 0.01, P(ran | they) = 0.08, P(sprinted | they) = 0.0. Smoothing doesn’t solve this; an unseen word like “sprinted” should receive a probability similar to that of its synonym “ran”.

  3. Distributional methods • Firth (1957) “You shall know a word by the company it keeps!” • Example from Nida (1975) noted by Lin: A bottle of tezgüino is on the table Everybody likes tezgüino Tezgüino makes you tipsy We make tezgüino out of corn • Intuition : • Just from context, you can guess the meaning of tezgüino. • So we should look at surrounding contexts, and see what other words occur in similar context.

  4. Fill-in-the-blank on Google You can get a quick & dirty impression of what words show up in a given context with Google queries:

  5. Intuition • Define each word by a sparse vector • Use a vector distance metric between words • Declare two words similar if their distance is small

  6. Context vectors • Target word w • We have a boolean variable f_i for each word v_i in the vocabulary. • f_i = “word v_i occurs in the neighborhood of w” • w = (f_1, f_2, f_3, …, f_N) • If w = tezgüino, v_1 = bottle, v_2 = make, v_3 = matrix, then w = (1, 1, 0, …)
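As an illustration of these boolean context vectors, here is a minimal sketch (the function name and toy corpus are my own, not from the slides) that records a 1 for every word seen within a small window of the target:

```python
from collections import defaultdict

def boolean_context_vectors(sentences, window=2):
    """vec[w][v] = 1 if v ever occurs within `window` words of w."""
    vectors = defaultdict(dict)
    for tokens in sentences:
        for i, w in enumerate(tokens):
            lo, hi = max(0, i - window), min(len(tokens), i + window + 1)
            for v in tokens[lo:i] + tokens[i + 1:hi]:
                vectors[w][v] = 1
    return vectors

corpus = [
    "a bottle of tezgüino is on the table".split(),
    "everybody likes tezgüino".split(),
    "tezgüino makes you tipsy".split(),
    "we make tezgüino out of corn".split(),
]
print(boolean_context_vectors(corpus)["tezgüino"])
# {'bottle': 1, 'of': 1, 'is': 1, 'on': 1, 'everybody': 1, 'likes': 1, ...}
```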

  7. Distributional similarity We need to define 3 things: 1. How the co-occurrence terms are defined • Vocabulary? N-Grams? 2. How terms are weighted • (Boolean? Frequency? Logs? Mutual information?) 3. What vector similarity metric should we use? • Euclidean distance? Cosine? Jaccard? Dice?

  8. 1. Defining co-occurrence vectors • Windows of neighboring words (n words to the left…) • Bag-of-words • We generally remove stop words • Con : we lose any sense of syntax • Solution: use the words occurring in particular grammatical relations
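A quick sketch of the window / bag-of-words option with stop words removed (the stop-word list and function name are illustrative, not from the course):

```python
from collections import Counter, defaultdict

STOP_WORDS = {"a", "of", "is", "on", "the", "we", "you", "out"}  # toy list

def cooccurrence_counts(sentences, window=2):
    """Bag-of-words co-occurrence counts within +/- `window` words,
    ignoring stop words as context terms."""
    counts = defaultdict(Counter)
    for tokens in sentences:
        for i, w in enumerate(tokens):
            lo, hi = max(0, i - window), min(len(tokens), i + window + 1)
            for v in tokens[lo:i] + tokens[i + 1:hi]:
                if v not in STOP_WORDS:
                    counts[w][v] += 1
    return counts
```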

  9. Defining co-occurrence vectors “The meaning of entities, and the meaning of grammatical relations among them, is related to the restriction of combinations of these entities relative to other entities.” Zellig Harris (1968) Idea : parse the sentence, extract grammar relations

  10. Vectors with grammatical dependencies • For the word cell: a vector of N * R features (N is the vocabulary size, R is the number of grammatical relations)
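The slide’s cell example is a figure, so here is only a hedged sketch of the idea: each feature is a (relation, other word) pair, giving up to N * R distinct features. The dependency triples below are invented for illustration; a real system would take them from a parser.

```python
from collections import Counter

# Hypothetical (relation, head, dependent) triples, as a parser might produce.
triples = [
    ("obj-of", "absorb", "cell"),
    ("subj-of", "divide", "cell"),
    ("nmod-of", "membrane", "cell"),
]

def dependency_features(triples, target):
    """Count features keyed by (relation, other word) for the target word."""
    feats = Counter()
    for rel, head, dep in triples:
        if dep == target:
            feats[(rel, head)] += 1
        elif head == target:
            feats[(rel, dep)] += 1
    return feats

print(dependency_features(triples, "cell"))
# Counter({('obj-of', 'absorb'): 1, ('subj-of', 'divide'): 1, ('nmod-of', 'membrane'): 1})
```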

  11. Group Exercise • Search “Naval Academy” and create a vector. • What other school is most similar? Most different? • Compare vectors.

  12. 2. Weighting the counts • Common: use the frequency count of the context term as its value, or any function of this frequency • Alternative: compute an association score • Consider the feature feat = “dried” for the target word = “tangerine”: P(feat | word) = count(feat, word) / count(word), so assoc_prob(word, feat) = P(feat | word)
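A one-function sketch of this association probability, assuming the dict-of-Counters structure returned by the earlier `cooccurrence_counts` sketch (the function name is mine):

```python
def assoc_prob(counts, word, feat):
    """assoc_prob(word, feat) = P(feat | word) = count(feat, word) / count(word)."""
    word_total = sum(counts[word].values())
    return counts[word][feat] / word_total if word_total else 0.0
```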

  13. Frequency-based problems • Objects of the verb drink: water 7, champagne 4, it 3, much 3, anything 3, liquid 2, wine 2 • Problem: “drink it” is more common than “drink wine” (but “wine” is a better drinkable thing than “it”) • Need: we need to control for expected frequency • Solution: normalize by the expected frequency

  14. Weighting: Mutual Information • Pointwise mutual information: a measure of how often two events x and y occur together, compared with what we would expect if they were independent: PMI(x, y) = log2 [ P(x, y) / ( P(x) P(y) ) ] • PMI between a target word w and a feature f: assoc_PMI(w, f) = log2 [ P(w, f) / ( P(w) P(f) ) ]
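A small sketch computing assoc_PMI over the same dict-of-Counters co-occurrence counts, with probabilities estimated directly from the counts (the function name is mine):

```python
import math
from collections import Counter

def pmi_table(counts):
    """assoc_PMI(w, f) = log2( P(w, f) / (P(w) * P(f)) )."""
    total = sum(sum(c.values()) for c in counts.values())
    w_totals = {w: sum(c.values()) for w, c in counts.items()}
    f_totals = Counter()
    for c in counts.values():
        f_totals.update(c)

    pmi = {}
    for w, c in counts.items():
        for f, n in c.items():
            p_wf = n / total
            p_w = w_totals[w] / total
            p_f = f_totals[f] / total
            pmi[(w, f)] = math.log2(p_wf / (p_w * p_f))
    return pmi
```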

  15. Mutual information intuition Objects of the verb drink

  16. Summary: weightings • See Manning and Schuetze (1999) for more

  17. 3. Defining vector similarity

  18. Summary of similarity measures
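The summary table itself is a figure in the slides; as a stand-in, here is a hedged sketch of three of the metrics named on slide 7 (cosine, Jaccard, Dice) for sparse vectors stored as dicts:

```python
import math

def cosine(u, v):
    """Cosine similarity for sparse vectors stored as dicts."""
    dot = sum(u[k] * v[k] for k in u if k in v)
    norm_u = math.sqrt(sum(x * x for x in u.values()))
    norm_v = math.sqrt(sum(x * x for x in v.values()))
    return dot / (norm_u * norm_v) if norm_u and norm_v else 0.0

def jaccard(u, v):
    """Jaccard overlap of the feature sets (boolean view of the vectors)."""
    a, b = set(u), set(v)
    return len(a & b) / len(a | b) if a | b else 0.0

def dice(u, v):
    """Dice coefficient of the feature sets."""
    a, b = set(u), set(v)
    return 2 * len(a & b) / (len(a) + len(b)) if a or b else 0.0
```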

  19. Evaluating similarity measures • Intrinsic evaluation • Correlation with word similarity ratings from humans • Extrinsic (task-based, end-to-end) evaluation • Malapropism (spelling error) detection • Word sense disambiguation (WSD) • Essay grading • Plagiarism detection • Taking TOEFL multiple-choice vocabulary tests • Language modeling in some application
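For the intrinsic option, a common recipe is to score word pairs with the model, score them with human ratings, and report rank correlation. A minimal sketch with made-up ratings and scores (none of the numbers below come from the slides):

```python
from scipy.stats import spearmanr

# Hypothetical human similarity ratings for word pairs (WordSim-style scale).
human = {("ship", "boat"): 9.0, ("cup", "mug"): 8.5, ("noon", "string"): 0.5}

# Hypothetical model similarities, e.g. from the cosine() sketch above.
model = {("ship", "boat"): 0.81, ("cup", "mug"): 0.74, ("noon", "string"): 0.05}

pairs = sorted(human)
rho, _ = spearmanr([human[p] for p in pairs], [model[p] for p in pairs])
print(f"Spearman correlation with human ratings: {rho:.2f}")
```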

  20. An example of detected plagiarism
