Vector Semantics



1. Vector Semantics. Natural Language Processing, Lecture 17. Adapted from Jurafsky and Martin, v3.

2. Why vector models of meaning? Computing the similarity between words: “fast” is similar to “rapid”, “tall” is similar to “height”. Question answering: Q: “How tall is Mt. Everest?” Candidate A: “The official height of Mount Everest is 29,029 feet.”

3. Word similarity for plagiarism detection.

4. Word similarity for historical linguistics: semantic change over time (Kulkarni, Al-Rfou, Perozzi, Skiena 2015; Sagi, Kaufmann, Clark 2013). [Chart: degree of semantic broadening for dog, deer, and hound across three periods: <1250, Middle 1350-1500, Modern 1500-1710.]

5. Problems with thesaurus-based meaning: • We don’t have a thesaurus for every language. • We can’t have a thesaurus for every year: for historical linguistics, we need to compare word meanings in year t to year t+1. • Thesauruses have problems with recall: many words and phrases are missing. • Thesauri work less well for verbs and adjectives.

6. Distributional models of meaning = vector-space models of meaning = vector semantics. Intuitions: • Zellig Harris (1954): “oculist and eye-doctor … occur in almost the same environments”; “If A and B have almost identical environments we say that they are synonyms.” • Firth (1957): “You shall know a word by the company it keeps!”

7. Intuition of distributional word similarity. • Nida example: suppose I asked you what tesgüino is. “A bottle of tesgüino is on the table.” “Everybody likes tesgüino.” “Tesgüino makes you drunk.” “We make tesgüino out of corn.” • From the context words, humans can guess that tesgüino means an alcoholic beverage like beer. • Intuition for the algorithm: two words are similar if they have similar word contexts.

8. Four kinds of vector models. Sparse vector representations: 1. Mutual-information weighted word co-occurrence matrices. Dense vector representations: 2. Singular value decomposition (and Latent Semantic Analysis); 3. Neural-network-inspired models (skip-grams, CBOW); 4. Brown clusters.

9. Shared intuition. • Model the meaning of a word by “embedding” it in a vector space. • The meaning of a word is a vector of numbers. • Vector models are also called “embeddings”. • Contrast: in many computational linguistic applications, word meaning is represented by a vocabulary index (“word number 545”). • Old philosophy joke: Q: What’s the meaning of life? A: LIFE’

10. Vector Semantics: Words and co-occurrence vectors

11. Co-occurrence matrices. • We represent how often a word occurs in a document: the term-document matrix. • Or how often a word occurs with another word: the term-term matrix (also called the word-word co-occurrence matrix or word-context matrix).

12. Term-document matrix. • Each cell: count of word w in a document d. • Each document is a count vector in ℕ^v (v = vocabulary size): a column below.

                  As You Like It   Twelfth Night   Julius Caesar   Henry V
    battle               1               1               8            15
    soldier              2               2              12            36
    fool                37              58               1             5
    clown                6             117               0             0
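As a concrete illustration (my own sketch, not from the slides), this small term-document matrix can be held as a numpy array, and each document is then simply a column of counts. The variable names are assumptions for the example:

    import numpy as np

    words = ["battle", "soldier", "fool", "clown"]
    docs = ["As You Like It", "Twelfth Night", "Julius Caesar", "Henry V"]

    # rows = words, columns = documents (counts from the slide)
    M = np.array([
        [ 1,   1,  8, 15],   # battle
        [ 2,   2, 12, 36],   # soldier
        [37,  58,  1,  5],   # fool
        [ 6, 117,  0,  0],   # clown
    ])

    # the count vector for "Julius Caesar" is its column
    julius_caesar = M[:, docs.index("Julius Caesar")]   # array([ 8, 12,  1,  0])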

13. Similarity in term-document matrices. • Two documents are similar if their vectors (columns) are similar. (Same matrix as slide 12.)

14. The words in a term-document matrix. • Each word is a count vector in ℕ^D (D = number of documents): a row in the matrix. (Same matrix as slide 12.)

15. The words in a term-document matrix. • Two words are similar if their vectors (rows) are similar. (Same matrix as slide 12.)

16. The word-word or word-context matrix. • Instead of entire documents, use smaller contexts: a paragraph, or a window of 4 words. • A word is now defined by a vector over counts of context words. • Instead of each vector being of length D, each vector is now of length |V|. • The word-word matrix is |V| x |V|.

17. Word-word matrix: sample contexts of 7 words.

                   aardvark   computer   data   pinch   result   sugar   …
    apricot            0          0        0      1       0        1
    pineapple          0          0        0      1       0        1
    digital            0          2        1      0       1        0
    information        0          1        6      0       4        0
    …
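A minimal sketch (my own toy example, not from the slides) of how such a word-context count matrix can be accumulated from a tokenized corpus with a fixed window size:

    from collections import Counter, defaultdict

    def cooccurrence_counts(tokens, window=4):
        """Count how often each context word appears within +/- `window`
        positions of each target word."""
        counts = defaultdict(Counter)
        for i, target in enumerate(tokens):
            lo, hi = max(0, i - window), min(len(tokens), i + window + 1)
            for j in range(lo, hi):
                if j != i:
                    counts[target][tokens[j]] += 1
        return counts

    toy = "we make tesguino out of corn and everybody likes tesguino".split()
    counts = cooccurrence_counts(toy, window=4)
    print(counts["tesguino"]["corn"])   # how often "corn" appears near "tesguino"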

18. Word-word matrix. • We showed only 4 x 6, but the real matrix is 50,000 x 50,000. • So it’s very sparse: most values are 0. • That’s OK, since there are lots of efficient algorithms for sparse matrices. • The size of the window depends on your goals: the shorter the window, the more syntactic the representation (1-3 words: very syntactic); the longer the window, the more semantic the representation (4-10 words: more semantic).
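As an aside (my own sketch, not from the slides), a count matrix like this is normally kept in a sparse format so the zeros cost nothing to store; scipy's compressed sparse row format is one common choice:

    import numpy as np
    from scipy.sparse import csr_matrix

    # the 4 x 6 dense counts from slide 17 (real matrices are roughly |V| x |V|)
    dense = np.array([
        [0, 0, 0, 1, 0, 1],   # apricot
        [0, 0, 0, 1, 0, 1],   # pineapple
        [0, 2, 1, 0, 1, 0],   # digital
        [0, 1, 6, 0, 4, 0],   # information
    ])
    sparse = csr_matrix(dense)   # only the 10 nonzero entries are stored
    print(sparse.nnz)            # -> 10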

19. Two kinds of co-occurrence between two words (Schütze and Pedersen, 1993). • First-order co-occurrence (syntagmatic association): the words are typically near each other; wrote is a first-order associate of book or poem. • Second-order co-occurrence (paradigmatic association): the words have similar neighbors; wrote is a second-order associate of words like said or remarked.

20. Vector Semantics: Positive Pointwise Mutual Information (PPMI)

21. Problem with raw counts. • Raw word frequency is not a great measure of association between words: it’s very skewed. “the” and “of” are very frequent, but maybe not the most discriminative. • We’d rather have a measure that asks whether a context word is particularly informative about the target word: Positive Pointwise Mutual Information (PPMI).

22. Pointwise Mutual Information. • Pointwise mutual information: do events x and y co-occur more often than if they were independent? PMI(x, y) = log2 [ P(x, y) / (P(x) P(y)) ]. • PMI between two words (Church & Hanks 1989): do words w and c co-occur more often than if they were independent? PMI(w, c) = log2 [ P(w, c) / (P(w) P(c)) ].
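A quick numerical sketch of the definition (my own toy numbers, not from the slides): if two words each occur with probability 0.1 and co-occur with probability 0.05, they co-occur five times more often than independence would predict.

    import math

    def pmi(p_xy, p_x, p_y):
        """Pointwise mutual information in bits."""
        return math.log2(p_xy / (p_x * p_y))

    print(pmi(0.05, 0.1, 0.1))   # log2(0.05 / 0.01) = log2(5), about 2.32 bits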

23. Positive Pointwise Mutual Information. • Negative PMI values are unreliable without enormous corpora, so we replace them with zero: PPMI(w, c) = max( log2 [ P(w, c) / (P(w) P(c)) ], 0 ).

24. Computing PPMI on a term-context matrix. • Matrix F with W rows (words) and C columns (contexts); f_ij is the number of times w_i occurs in context c_j. • p_ij = f_ij / Σ_{i=1..W} Σ_{j=1..C} f_ij, p_i* = Σ_{j=1..C} f_ij / Σ_i Σ_j f_ij, p_*j = Σ_{i=1..W} f_ij / Σ_i Σ_j f_ij. • pmi_ij = log2 ( p_ij / (p_i* p_*j) ), and ppmi_ij = pmi_ij if pmi_ij > 0, otherwise 0.
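A compact numpy sketch of this computation (my own code, following the formulas above). Applied to the 4 x 5 count matrix from slide 17 (dropping the all-zero aardvark column), it reproduces the probabilities and PPMI values shown on the next two slides:

    import numpy as np

    def ppmi(F):
        """PPMI matrix for a word-by-context count matrix F."""
        total = F.sum()
        p_ij = F / total                              # joint probabilities
        p_i = F.sum(axis=1, keepdims=True) / total    # row (word) marginals
        p_j = F.sum(axis=0, keepdims=True) / total    # column (context) marginals
        with np.errstate(divide="ignore"):            # log2(0) -> -inf, clipped below
            pmi = np.log2(p_ij / (p_i * p_j))
        return np.maximum(pmi, 0)

    # counts for apricot, pineapple, digital, information
    # over contexts computer, data, pinch, result, sugar
    F = np.array([
        [0, 0, 1, 0, 1],
        [0, 0, 1, 0, 1],
        [2, 1, 0, 1, 0],
        [1, 6, 0, 4, 0],
    ], dtype=float)
    print(ppmi(F).round(2))   # e.g. PPMI(information, data) comes out ~0.57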

25. Worked example. p_ij = f_ij / Σ_i Σ_j f_ij; p(w_i) = Σ_j f_ij / N; p(c_j) = Σ_i f_ij / N. For the counts on slide 17: p(w=information, c=data) = 6/19 = .32; p(w=information) = 11/19 = .58; p(c=data) = 7/19 = .37.

    p(w, context) and p(w):
                   computer   data   pinch   result   sugar    p(w)
    apricot          0.00     0.00    0.05    0.00     0.05    0.11
    pineapple        0.00     0.00    0.05    0.00     0.05    0.11
    digital          0.11     0.05    0.00    0.05     0.00    0.21
    information      0.05     0.32    0.00    0.21     0.00    0.58
    p(context)       0.16     0.37    0.11    0.26     0.11

26. pmi_ij = log2 ( p_ij / (p_i* p_*j) ), using the joint probabilities from slide 25. For example, pmi(information, data) = log2( .32 / (.37 * .58) ) = .58 (.57 using full precision).

    PPMI(w, context):
                   computer   data   pinch   result   sugar
    apricot           -        -      2.25     -       2.25
    pineapple         -        -      2.25     -       2.25
    digital          1.66     0.00     -      0.00      -
    information      0.00     0.57     -      0.47      -

27. Weighting PMI. • PMI is biased toward infrequent events: very rare words have very high PMI values. • Two solutions: give rare words slightly higher probabilities, or use add-one smoothing (which has a similar effect).

28. Weighting PMI: giving rare context words slightly higher probability. • Raise the context probabilities to the power α = 0.75: PPMI_α(w, c) = max( log2 [ P(w, c) / (P(w) P_α(c)) ], 0 ), where P_α(c) = count(c)^α / Σ_c' count(c')^α. • This raises the probability of rare contexts, which lowers their otherwise inflated PMI scores.
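A sketch of that reweighting (my own code; the α = 0.75 value follows the standard context-distribution smoothing described in Jurafsky and Martin, after Levy et al. 2015). It can be applied to the same count matrix F used in the PPMI sketch above:

    import numpy as np

    def ppmi_alpha(F, alpha=0.75):
        """PPMI with smoothed context probabilities P_alpha(c)."""
        total = F.sum()
        p_wc = F / total
        p_w = F.sum(axis=1, keepdims=True) / total
        context_counts = F.sum(axis=0)
        p_c_alpha = context_counts**alpha / (context_counts**alpha).sum()
        with np.errstate(divide="ignore"):
            pmi = np.log2(p_wc / (p_w * p_c_alpha))
        return np.maximum(pmi, 0)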

29. Use Laplace (add-k) smoothing.

    Add-2 smoothed counts (w, context):
                   computer   data   pinch   result   sugar
    apricot            2        2       3       2        3
    pineapple          2        2       3       2        3
    digital            4        3       2       3        2
    information        3        8       2       6        2

    p(w, context) [add-2] and p(w):
                   computer   data   pinch   result   sugar    p(w)
    apricot          0.03     0.03    0.05    0.03     0.05    0.20
    pineapple        0.03     0.03    0.05    0.03     0.05    0.20
    digital          0.07     0.05    0.03    0.05     0.03    0.24
    information      0.05     0.14    0.03    0.10     0.03    0.36
    p(context)       0.19     0.25    0.17    0.22     0.17
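A sketch combining add-k smoothing with the PPMI computation above (my own code); with k = 2 and the count matrix F from the earlier sketch, it reproduces the smoothed counts and probabilities shown on this slide:

    import numpy as np

    def ppmi_add_k(F, k=2):
        """PPMI computed from add-k smoothed counts."""
        F_smooth = F + k                        # add k to every cell
        total = F_smooth.sum()
        p = F_smooth / total
        p_w = p.sum(axis=1, keepdims=True)      # smoothed word marginals
        p_c = p.sum(axis=0, keepdims=True)      # smoothed context marginals
        return np.maximum(np.log2(p / (p_w * p_c)), 0)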

30. PPMI versus add-2 smoothed PPMI.

    PPMI(w, context):
                   computer   data   pinch   result   sugar
    apricot           -        -      2.25     -       2.25
    pineapple         -        -      2.25     -       2.25
    digital          1.66     0.00     -      0.00      -
    information      0.00     0.57     -      0.47      -

    PPMI(w, context) [add-2]:
                   computer   data   pinch   result   sugar
    apricot          0.00     0.00    0.56    0.00     0.56
    pineapple        0.00     0.00    0.56    0.00     0.56
    digital          0.62     0.00    0.00    0.00     0.00
    information      0.00     0.58    0.00    0.37     0.00

31. Vector Semantics: Measuring similarity, the cosine

32. Measuring similarity. • Given two target words v and w, we need a way to measure their similarity. • Most measures of vector similarity are based on the dot product (inner product) from linear algebra: dot(v, w) = Σ_i v_i w_i. • It is high when two vectors have large values in the same dimensions, and low (in fact 0) for orthogonal vectors with zeros in complementary distribution.
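A minimal sketch (my own code) of the dot product and of the cosine measure that the lecture builds toward, applied to word vectors taken from the slide 17 counts:

    import numpy as np

    def cosine(v, w):
        """Cosine similarity: dot product of length-normalized vectors."""
        return np.dot(v, w) / (np.linalg.norm(v) * np.linalg.norm(w))

    # word vectors over contexts (computer, data, pinch, result, sugar)
    digital     = np.array([2.0, 1.0, 0.0, 1.0, 0.0])
    information = np.array([1.0, 6.0, 0.0, 4.0, 0.0])
    apricot     = np.array([0.0, 0.0, 1.0, 0.0, 1.0])

    print(np.dot(digital, information))   # 12.0: large values in shared dimensions
    print(np.dot(digital, apricot))       # 0.0: zeros in complementary distribution
    print(cosine(digital, information))   # about 0.67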
