  1. More Distributional Semantics: New Models & Applications CMSC 723 / LING 723 / INST 725 Marine Carpuat marine@cs.umd.edu

  2. Last week… • Q: what is understanding meaning? • A: meaning is knowing when words are similar or not • Topics – Word similarity – Thesaurus-based methods – Distributional word representations – Dimensionality reduction

  3. Today • New models for learning word representations • From “count”-based models (e.g., LSA) to “prediction”-based models (e.g., word2vec) … and back • Beyond semantic similarity • Learning semantic relations between words

  4. DISTRIBUTIONAL MODELS OF WORD MEANING

  5. Distributional Approaches: Intuition “You shall know a word by the company it keeps!” (Firth, 1957) “ Differences of meaning correlates with differences of distribution” (Harris, 1970)

  6. Context Features • Word co-occurrence within a window: • Grammatical relations:
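The slide's examples of context features were shown as figures; as an assumed illustration (function and corpus names are mine, not from the slides), window-based co-occurrence counts can be collected like this:

```python
from collections import defaultdict

def cooccurrence_counts(sentences, window=2):
    """Count how often each context word appears within +/- `window`
    positions of each target word."""
    counts = defaultdict(lambda: defaultdict(int))
    for tokens in sentences:
        for i, word in enumerate(tokens):
            lo, hi = max(0, i - window), min(len(tokens), i + window + 1)
            for j in range(lo, hi):
                if j != i:
                    counts[word][tokens[j]] += 1
    return counts

corpus = [["the", "cat", "sat", "on", "the", "mat"],
          ["the", "dog", "sat", "on", "the", "rug"]]
counts = cooccurrence_counts(corpus)
print(dict(counts["sat"]))  # {'the': 4, 'cat': 1, 'on': 2, 'dog': 1}
```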

  7. Association Metric • Commonly-used metric: Pointwise Mutual Information, $\mathrm{assoc}_{\mathrm{PMI}}(w, f) = \log_2 \dfrac{P(w, f)}{P(w)\,P(f)}$ • Can be used as a feature value or by itself
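A minimal sketch of turning raw co-occurrence counts into PMI association values; the nested-dict `counts` structure is an assumption matching the window-counting sketch above:

```python
import math
from collections import defaultdict

def pmi(counts):
    """Pointwise mutual information: log2( P(w, f) / (P(w) * P(f)) )."""
    total = sum(c for ctx in counts.values() for c in ctx.values())
    w_totals = {w: sum(ctx.values()) for w, ctx in counts.items()}
    f_totals = defaultdict(int)
    for ctx in counts.values():
        for f, c in ctx.items():
            f_totals[f] += c
    scores = {}
    for w, ctx in counts.items():
        for f, c in ctx.items():
            p_wf = c / total
            p_w = w_totals[w] / total
            p_f = f_totals[f] / total
            scores[(w, f)] = math.log2(p_wf / (p_w * p_f))
    return scores
```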

  8. Computing Similarity • Semantic similarity boils down to computing some measure on context vectors • Cosine distance: borrowed from information retrieval, $\mathrm{sim}_{\mathrm{cosine}}(\vec{v}, \vec{w}) = \dfrac{\vec{v} \cdot \vec{w}}{|\vec{v}|\,|\vec{w}|} = \dfrac{\sum_{i=1}^{N} v_i w_i}{\sqrt{\sum_{i=1}^{N} v_i^2}\,\sqrt{\sum_{i=1}^{N} w_i^2}}$
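A minimal sketch of cosine similarity over sparse context vectors stored as dicts (the example vectors and values are illustrative, not from the slides):

```python
import math

def cosine(v, w):
    """Cosine similarity between two sparse context vectors (feature -> weight dicts)."""
    dot = sum(v[k] * w[k] for k in v if k in w)
    norm_v = math.sqrt(sum(x * x for x in v.values()))
    norm_w = math.sqrt(sum(x * x for x in w.values()))
    return dot / (norm_v * norm_w) if norm_v and norm_w else 0.0

print(cosine({"sat": 2.0, "ran": 1.0}, {"sat": 1.0, "barked": 3.0}))
```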

  9. Dimensionality Reduction with Latent Semantic Analysis
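A hedged sketch of LSA-style dimensionality reduction via truncated SVD; the toy word-by-context matrix and the choice of k are assumptions for illustration only:

```python
import numpy as np

# Toy word-by-context count matrix (rows: words, columns: context features).
X = np.array([[2., 0., 1., 0.],
              [1., 0., 2., 0.],
              [0., 3., 0., 1.],
              [0., 2., 0., 2.]])

# LSA-style reduction: keep only the top-k singular dimensions.
k = 2
U, S, Vt = np.linalg.svd(X, full_matrices=False)
word_vectors = U[:, :k] * S[:k]   # dense k-dimensional word representations
print(word_vectors.shape)          # (4, 2)
```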

  10. NEW DIRECTIONS: PREDICT VS. COUNT MODELS

  11. Word vectors as a byproduct of language modeling: A Neural Probabilistic Language Model. Bengio et al., JMLR 2003

  12. Using neural word representations in NLP • word representations from neural LMs – aka distributed word representations – aka word embeddings • How would you use these word vectors? • Turian et al. [2010] – word representations as features consistently improve performance of • Named-Entity Recognition • Text chunking tasks
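One assumed way to use embeddings as features in the Turian et al. spirit (the helper and feature choices here are illustrative, not from the paper): augment each token's hand-crafted features with its embedding before feeding a sequence classifier.

```python
import numpy as np

def token_features(token, embeddings, dim=50):
    """Hand-crafted features for one token, augmented with its word embedding
    (zero vector for out-of-vocabulary tokens)."""
    hand_crafted = np.array([token.istitle(), token.isupper(), token.isdigit()],
                            dtype=float)
    emb = embeddings.get(token.lower(), np.zeros(dim))
    return np.concatenate([hand_crafted, emb])
```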

  13. Word2vec [Mikolov et al. 2013] introduces simpler models https://code.google.com/p/word2vec

  14. Word2vec claims • Useful representations for NLP applications • Can discover relations between words using vector arithmetic: king – male + female = queen • Paper + tool received lots of attention even outside the NLP research community • Try it out at the “word2vec playground”: http://deeplearner.fz-qqq.net/
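Assuming a pretrained word2vec model is available (the file path below is a placeholder), the vector-arithmetic claim from the slide can be tried with gensim's KeyedVectors:

```python
from gensim.models import KeyedVectors

# Path is a placeholder; any vectors in word2vec format will do.
vectors = KeyedVectors.load_word2vec_format("vectors.bin", binary=True)

# king - male + female ~= ?  (the analogy from the slide)
print(vectors.most_similar(positive=["king", "female"], negative=["male"], topn=3))
```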

  15. Demystifying the skip-gram model [Levy & Goldberg, 2014] • Word and context word embeddings • Learn word vector parameters so as to maximize the probability of the training set D: expensive! http://www.cs.bgu.ac.il/~yoavg/publications/negative-sampling.pdf

  16. Toward the training objective for skip-gram • Problem: trivial solution when $v_c = v_w$ and $v_c \cdot v_w = K$ for all $v_c, v_w$, with a large $K$ http://www.cs.bgu.ac.il/~yoavg/publications/negative-sampling.pdf

  17. Final training objective • D: word–context pairs observed in the data • D’: artificially generated word–context pairs not observed in the data (negative sampling) http://www.cs.bgu.ac.il/~yoavg/publications/negative-sampling.pdf
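For reference, the negative-sampling objective in the cited Goldberg & Levy note has (up to notation) this shape, where $\sigma$ is the logistic function, $D$ the observed word–context pairs and $D'$ the sampled negative pairs:

$$\arg\max_{\theta} \; \sum_{(w,c)\in D} \log \sigma(\vec{v}_c \cdot \vec{v}_w) \;+\; \sum_{(w,c)\in D'} \log \sigma(-\vec{v}_c \cdot \vec{v}_w)$$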

  18. Skip-gram model [Mikolov et al. 2013] • Predict context words given the current word (i.e., 2(n-1) classifiers for a context window of size n) • Use negative samples at each position
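A minimal numpy sketch of one skip-gram negative-sampling gradient step; this is not the word2vec C code, and all names here are illustrative assumptions:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def sgns_update(W, C, w, c_pos, c_negs, lr=0.025):
    """One stochastic update for a single (word, context) pair with negative samples.
    W, C: (V, d) word and context embedding matrices; w, c_pos: row indices;
    c_negs: list of sampled negative context indices."""
    v_w = W[w]
    grad_w = np.zeros_like(v_w)
    # Observed pair: push sigma(v_c . v_w) toward 1.
    g = sigmoid(C[c_pos] @ v_w) - 1.0
    grad_w += g * C[c_pos]
    C[c_pos] -= lr * g * v_w
    # Negative samples: push sigma(v_c . v_w) toward 0.
    for c in c_negs:
        g = sigmoid(C[c] @ v_w)
        grad_w += g * C[c]
        C[c] -= lr * g * v_w
    W[w] -= lr * grad_w
```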

  19. Don’t count, predict! [Baroni et al. 2014] “This paper has presented the first systematic comparative evaluation of count and predict vectors. As seasoned distributional semanticists with thorough experience in developing and using count vectors, we set out to conduct this study because we were annoyed by the triumphalist overtones surrounding predict models, despite the almost complete lack of a proper comparison to count vectors.”

  20. Don’t count, predict! [Baroni et al. 2014] “Our secret wish was to discover that it is all hype, and count vectors are far superior to their predictive counterparts. […] Instead, we found that the predict models are so good that, while the triumphalist overtones still sound excessive, there are very good reasons to switch to the new architecture.”

  21. Why does word2vec produce good word representations? Levy & Goldberg, Apr 2014: “Good question. We don’t really know. The distributional hypothesis states that words in similar contexts have similar meanings. The objective above clearly tries to increase the quantity v_w.v_c for good word-context pairs, and decrease it for bad ones. Intuitively, this means that words that share many contexts will be similar to each other […]. This is, however, very hand-wavy.”

  22. Learning skip-gram is almost equivalent to matrix factorization [Levy & Goldberg 2014] http://www.cs.bgu.ac.il/~yoavg/publications/nips2014pmi.pdf
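The correspondence pointed out by Levy & Goldberg suggests a "count-based" recipe; a hedged sketch, assuming a dense matrix is affordable: build the shifted positive PMI matrix (PMI minus log k, clipped at 0) and factorize it with a truncated SVD. Function names and weighting choices are illustrative, not the paper's exact setup.

```python
import numpy as np

def sppmi_vectors(counts, k_neg=5, dim=100):
    """counts: (V_w, V_c) word-by-context co-occurrence matrix.
    Returns dim-dimensional word vectors from SVD of the shifted PPMI matrix."""
    counts = np.asarray(counts, dtype=float)
    total = counts.sum()
    p_w = counts.sum(axis=1, keepdims=True) / total
    p_c = counts.sum(axis=0, keepdims=True) / total
    p_wc = counts / total
    with np.errstate(divide="ignore", invalid="ignore"):
        pmi = np.log(p_wc / (p_w * p_c))
    pmi[~np.isfinite(pmi)] = 0.0
    sppmi = np.maximum(pmi - np.log(k_neg), 0.0)   # shift by log of #negative samples
    U, S, Vt = np.linalg.svd(sppmi, full_matrices=False)
    return U[:, :dim] * np.sqrt(S[:dim])           # symmetric split of singular values
```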

  23. New directions: Summary • There are alternative ways to learn distributional representations for word meaning • Understanding >> Magic

  24. BEYOND SIMILARITY: PREDICTING SEMANTIC RELATIONS BETWEEN WORDS Slides credit: Peter Turney

  25. Recognizing Textual Entailment • Sample problem – Text: iTunes software has seen strong sales in Europe – Hypothesis: Strong sales for iTunes in Europe – Task: Does the Text entail the Hypothesis? Yes or No?

  26. Recognizing Textual Entailment • Sample problem – Task: Does the Text entail the Hypothesis? Yes or No? • Has emerged as a core task for semantic analysis in NLP – subsumes many tasks: Paraphrase Detection, Question Answering, etc. – fully text based: does not require committing to a specific semantic representation [Dagan et al. 2013]

  27. Recognizing lexical entailment • To recognize entailment between sentences, we must first recognize entailment between words • Sample problem – Text: George was bitten by a dog – Hypothesis: George was attacked by an animal

  28. Lexical entailment & semantic relations • Synonymy: synonyms entail each other (firm entails company) • Is-a relations: hyponyms entail hypernyms (automaker entails company) • Part-whole relations: it depends (government entails minister, but division does not entail company) • Entailment also covers other relations (ocean entails water, murder entails death)

  29. • We know how to build word vectors that represent word meaning • How can we predict entailment using these vectors?

  30. Approach 1: context inclusion hypothesis • Hypothesis: – if a word a tends to occur in a subset of the contexts in which a word b occurs (b contextually includes a) – then a (the narrower term) tends to entail b (the broader term) • Inspired by formal logic • In practice – Design an asymmetric real-valued metric to compare word vectors [Kotlerman, Dagan, et al. 2010]

  31. Approach 1: the BalAPinc Metric • Complex hand-crafted metric!
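BalAPinc itself is the complex hand-crafted metric referenced on the slide; as an assumed, much-simplified stand-in in the same spirit (how much of a's weighted context is covered by b's), an asymmetric inclusion score can look like this. The toy vectors below are illustrative only.

```python
def inclusion_score(a_ctx, b_ctx):
    """Asymmetric score: fraction of a's positive context weight covered by b.
    a_ctx, b_ctx: dicts mapping context features to association weights (e.g., PMI).
    A high score suggests a's contexts are included in b's (a entails b)."""
    covered = sum(w for f, w in a_ctx.items() if f in b_ctx and w > 0)
    total = sum(w for w in a_ctx.values() if w > 0)
    return covered / total if total else 0.0

# Toy illustration: narrower "automaker" vs. broader "company".
automaker = {"produces": 2.1, "cars": 3.0, "recalls": 1.2}
company   = {"produces": 1.5, "recalls": 0.8, "profits": 2.0, "cars": 0.5}
print(inclusion_score(automaker, company))  # high: automaker -> company
print(inclusion_score(company, automaker))  # lower in the other direction
```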

  32. Approach 2: context combination hypothesis • Hypothesis: – The tendency of word a to entail word b is correlated with some learnable function of the contexts in which a occurs, and the contexts in which b occurs – Some combinations of contexts tend to block entailment, others tend to allow entailment • In practice – Binary prediction task – Supervised learning from labeled word pairs [Baroni, Bernardi, Do and Shan, 2012]
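A hedged sketch of this supervised setup (not the exact features of Baroni et al. 2012): represent each labeled (a, b) pair by concatenating the two word vectors and train a binary classifier on the labeled pairs.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def pair_features(vec, pairs):
    """vec: dict word -> embedding (np.ndarray); pairs: list of (a, b) word pairs."""
    return np.array([np.concatenate([vec[a], vec[b]]) for a, b in pairs])

def train_entailment_classifier(vec, train_pairs, labels):
    """labels: 1 if a entails b, else 0, from a labeled lexical-entailment dataset."""
    clf = LogisticRegression(max_iter=1000)
    clf.fit(pair_features(vec, train_pairs), labels)
    return clf
```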

  33. Approach 3: similarity differences hypothesis • Hypothesis – The tendency of a to entail b is correlated with some learnable function of the differences in their similarities, sim(a,r) – sim(b,r), to a set of reference words r in R – Some differences tend to block entailment, and others tend to allow entailment • In practice – Binary prediction task – Supervised learning from labeled word pairs + reference words [Turney & Mohammad 2015]

  34. Approach 3: similarity differences hypothesis
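A sketch of the similarity-differences features described above (the reference word set and all names here are assumptions): each pair (a, b) becomes the vector of differences sim(a, r) − sim(b, r) over a fixed set of reference words, fed to the same kind of supervised classifier.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def cos(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

def diff_features(vec, pairs, reference_words):
    """One feature per reference word r: sim(a, r) - sim(b, r)."""
    return np.array([[cos(vec[a], vec[r]) - cos(vec[b], vec[r])
                      for r in reference_words]
                     for a, b in pairs])

def train_diff_classifier(vec, train_pairs, labels, reference_words):
    clf = LogisticRegression(max_iter=1000)
    clf.fit(diff_features(vec, train_pairs, reference_words), labels)
    return clf
```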

  35. Evaluation: test set 1/3 (KDSZ)

  36. Evaluation: test set 2/3 (JMTH)

  37. Evaluation: test set 3/3 (BBDS)

  38. Evaluation [Turney & Mohammad 2015]

  39. Lessons from the lexical entailment task • The distributional hypothesis can be refined and put to use in various ways to detect relations between words beyond the concept of similarity • The combination of unsupervised similarity + supervised learning is powerful

  40. RECAP

  41. Today: a glimpse into recent research • New models for learning word representations • From “count”-based models (e.g., LSA) to “prediction”-based models (e.g., word2vec) … and back • Beyond semantic similarity • Learning lexical entailment • Next topics: multiword expressions & predicate-argument structure

  42. References • Don’t count, predict! [Baroni et al. 2014] http://clic.cimec.unitn.it/marco/publications/acl2014/baroni-etal-countpredict-acl2014.pdf • Word2vec explained [Goldberg & Levy 2014] http://www.cs.bgu.ac.il/~yoavg/publications/negative-sampling.pdf • Neural Word Embeddings as Implicit Matrix Factorization [Levy & Goldberg 2014] http://www.cs.bgu.ac.il/~yoavg/publications/nips2014pmi.pdf • Experiments with Three Approaches to Recognizing Lexical Entailment [Turney & Mohammad 2015] http://arxiv.org/abs/1401.8269
