

  1. Data-Intensive Distributed Computing CS 431/631 451/651 (Fall 2020) Part 4: Analyzing Text (1/2). Ali Abedi. These slides are available at https://www.student.cs.uwaterloo.ca/~cs451. This work is licensed under a Creative Commons Attribution-Noncommercial-Share Alike 3.0 United States License; see http://creativecommons.org/licenses/by-nc-sa/3.0/us/ for details.

  2. Structure of the Course: “Core” framework features and algorithm design: MapReduce, Apache Hadoop, Apache Spark.

  3. Structure of the Course: Analyzing Text, Analyzing Graphs, Analyzing Relational Data, and Data Mining, all built on the “Core” framework features and algorithm design.

  4. Pairs. Stripes. Seems pretty trivial… More than a “toy problem”? Answer: language models.

  5. Language Models: assigning a probability to a sentence. Why? • Machine translation: P(High winds tonight) > P(Large winds tonight) • Spelling correction: P(Waterloo is a great city) > P(Waterloo is a grate city) • Speech recognition: P(I saw a van) > P(eyes awe of an). (Slide from Dan Jurafsky.) Given a sentence with T words, we want to assign a probability to it; this has many applications in natural language processing.

  6. Language Models [chain rule]: P(“Waterloo is a great city”) = P(Waterloo) x P(is | Waterloo) x P(a | Waterloo is) x P(great | Waterloo is a) x P(city | Waterloo is a great). Is this tractable? For a sentence with T words, we assign a probability by repeatedly applying P(A, B) = P(B) P(A | B). It's becoming too complicated, so let's simplify it.
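In general, the chain rule decomposes the sentence probability as follows (a standard identity, written out here because the slide's equation did not survive extraction):

```latex
\[
P(w_1, w_2, \ldots, w_T) = \prod_{t=1}^{T} P(w_t \mid w_1, \ldots, w_{t-1})
\]
```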

  7. Approximating Probabilities: N-Grams. Basic idea: limit the history to a fixed number of (N − 1) words (Markov assumption). N = 1: Unigram Language Model. N-grams are different levels of simplification of the chain rule; the unigram model is the simplest, and hence the most inaccurate.
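The unigram (N = 1) approximation, reconstructed from the standard formulation since the slide's formula is missing from this transcript:

```latex
\[
P(w_1, \ldots, w_T) \approx \prod_{t=1}^{T} P(w_t)
\]
```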

  8. Approximating Probabilities: N-Grams. Basic idea: limit the history to a fixed number of (N − 1) words (Markov assumption). N = 2: Bigram Language Model. Since we also want to include the first word in the bigram model, we need a dummy beginning-of-sentence marker <s>. We usually also have an end-of-sentence marker, but for the sake of brevity I don't show that here.
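The bigram (N = 2) approximation, again in its standard form (with w_0 taken to be the <s> marker):

```latex
\[
P(w_1, \ldots, w_T) \approx \prod_{t=1}^{T} P(w_t \mid w_{t-1})
\]
```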

  9. Approximating Probabilities: N-Grams. Basic idea: limit the history to a fixed number of (N − 1) words (Markov assumption). N = 3: Trigram Language Model.
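The trigram (N = 3) approximation, in its standard form (conditioning on the two previous words, with <s> padding at the start):

```latex
\[
P(w_1, \ldots, w_T) \approx \prod_{t=1}^{T} P(w_t \mid w_{t-2}, w_{t-1})
\]
```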

  10. Building N-Gram Language Models: compute maximum likelihood estimates (MLE) for the individual n-gram probabilities (unigram, bigram, and so on; the formulas are given below). This generalizes to higher-order n-grams; state-of-the-art models use ~5-grams. We already know how to do this in MapReduce!
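The slide's unigram and bigram MLE formulas did not survive extraction; the standard estimates, with c(·) a corpus count and N the total number of tokens, are:

```latex
\[
P_{\mathrm{MLE}}(w) = \frac{c(w)}{N},
\qquad
P_{\mathrm{MLE}}(w_n \mid w_{n-1}) = \frac{c(w_{n-1}\, w_n)}{c(w_{n-1})}
\]
```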

  11. Estimating the Probability Distribution: the sparsity problem. Let's now see how we can use these models and what problems they have.

  12. Example: Bigram Language Model. Training corpus:
  <s> I am Sam </s>
  <s> Sam I am </s>
  <s> I do not like green eggs and ham </s>
  Bigram probability estimates:
  P( I | <s> ) = 2/3 = 0.67    P( Sam | <s> ) = 1/3 = 0.33
  P( am | I ) = 2/3 = 0.67     P( do | I ) = 1/3 = 0.33
  P( </s> | Sam ) = 1/2 = 0.50  P( Sam | am ) = 1/2 = 0.50
  ...
  Note: we don't ever cross sentence boundaries.
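A minimal Python sketch (my illustration, not code from the course) that reproduces these bigram estimates from the three training sentences:

```python
from collections import Counter

# Training corpus from the slide; <s> and </s> mark sentence boundaries.
corpus = [
    "<s> I am Sam </s>",
    "<s> Sam I am </s>",
    "<s> I do not like green eggs and ham </s>",
]

unigram_counts = Counter()
bigram_counts = Counter()
for sentence in corpus:
    tokens = sentence.split()
    # Count within each sentence only: we never cross sentence boundaries.
    unigram_counts.update(tokens)
    bigram_counts.update(zip(tokens, tokens[1:]))

def bigram_mle(word, prev):
    """Maximum likelihood estimate P(word | prev) = c(prev word) / c(prev)."""
    return bigram_counts[(prev, word)] / unigram_counts[prev]

print(bigram_mle("I", "<s>"))    # 2/3 ~ 0.67
print(bigram_mle("Sam", "<s>"))  # 1/3 ~ 0.33
print(bigram_mle("am", "I"))     # 2/3 ~ 0.67
```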

  13. Data Sparsity. Using the bigram estimates above: P(I like ham) = P( I | <s> ) P( like | I ) P( ham | like ) P( </s> | ham ) = 0. Issue: sparsity! Why is the 0 bad?

  14. Solution: Smoothing. Zeros are bad for any statistical estimator; we need better estimators because MLEs give us a lot of zeros, and a distribution without zeros is “smoother”. The Robin Hood philosophy: take from the rich (seen n-grams) and give to the poor (unseen n-grams). Lots of techniques: Laplace, Good-Turing, Katz backoff, Jelinek-Mercer; Kneser-Ney represents best practice.

  15. Laplace Smoothing: the simplest and oldest smoothing technique. Just add 1 to all n-gram counts, including the unseen ones. So, what do the revised estimates look like?

  16. Laplace Smoothing: revised unigram and bigram estimates (see the formulas below).
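The slide's formulas did not survive extraction; the standard add-one estimates, with V the vocabulary size and N the number of tokens, are:

```latex
\[
P_{\mathrm{Lap}}(w) = \frac{c(w) + 1}{N + V},
\qquad
P_{\mathrm{Lap}}(w_n \mid w_{n-1}) = \frac{c(w_{n-1}\, w_n) + 1}{c(w_{n-1}) + V}
\]
```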

  17. Other variations… Many smoothing algorithms use this general representation; for example, Kneser-Ney.
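The general representation the slide refers to is not shown in this transcript; a common way to write it, as a reconstruction of the standard interpolated form rather than the exact equation on the slide, is a discounted relative frequency plus a weighted lower-order model:

```latex
\[
P(w_n \mid w_{n-1}) =
\frac{\max\{c(w_{n-1}\, w_n) - D,\ 0\}}{c(w_{n-1})}
+ \lambda(w_{n-1})\, P(w_n)
\]
```

Here D is a discount and λ(w_{n-1}) is chosen so the distribution sums to one; different smoothing methods differ in how they set D, λ, and the lower-order term.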

  18. Interesting intuition behind Kneser-Ney smoothing. Let's complete this sentence: “I cannot find my reading …” A unigram-based completion might give “I cannot find my reading francisco”. Problem: “francisco” appears more frequently than “glasses”, so the unigram probability is misleading here. Solution: “francisco” only appears after “san”! Instead of the unigram probability, use the number of contexts “francisco” appears in.
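This continuation-count idea can be written as the standard Kneser-Ney continuation probability (an assumed reconstruction; the slide's own formula is not in this transcript): the number of distinct bigram types ending in w, normalized by the total number of distinct bigram types.

```latex
\[
P_{\mathrm{cont}}(w) =
\frac{\bigl|\{\, w' : c(w'\, w) > 0 \,\}\bigr|}
     {\bigl|\{\, (w', w'') : c(w'\, w'') > 0 \,\}\bigr|}
\]
```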

  19. Stupid Backoff. Let's break all the rules, but throw lots of data at the problem! Source: Brants et al. (EMNLP 2007).
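For reference, the scoring function from Brants et al. (EMNLP 2007), which the slide presumably displayed (reconstructed here from the paper, not copied from the slide): back off to shorter contexts with a fixed factor α, and do not renormalize, so S is a score rather than a probability.

```latex
\[
S(w_i \mid w_{i-k+1}^{i-1}) =
\begin{cases}
\dfrac{f(w_{i-k+1}^{i})}{f(w_{i-k+1}^{i-1})} & \text{if } f(w_{i-k+1}^{i}) > 0,\\[1ex]
\alpha \, S(w_i \mid w_{i-k+2}^{i-1}) & \text{otherwise,}
\end{cases}
\qquad
S(w_i) = \frac{f(w_i)}{N}, \quad \alpha = 0.4
\]
```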

  20. Stupid Backoff Implementation.
  Straightforward approach: count each order separately.
  A B      remember this value f(A B)
  A B C    S(C | A B) = f(A B C) / f(A B)
  A B D    S(D | A B) = f(A B D) / f(A B)
  A B E    S(E | A B) = f(A B E) / f(A B)
  ...
  More clever approach: count all orders together.
  A B      remember this value
  A B C    remember this value
  A B C P
  A B C Q
  A B D    remember this value
  A B D X
  A B D Y
  ...
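A minimal single-machine Python sketch (my illustration under an assumed input format, not the course's MapReduce implementation) of the "count all orders together" idea: because every n-gram sorts after its prefix, the count of a context such as "A B" can simply be remembered and reused when its extensions "A B C", "A B D", ... arrive.

```python
def relative_frequencies(sorted_ngram_counts):
    """Compute S(w | context) = f(ngram) / f(context) in one pass.

    sorted_ngram_counts: iterable of (tuple_of_words, count) pairs, sorted so
    that a context like ('A', 'B') arrives before its extensions
    ('A', 'B', 'C'), ('A', 'B', 'D'), ...; this mimics the order in which a
    reducer would see n-grams of all orders counted together.
    """
    remembered = {}   # counts of lower-order n-grams seen so far
    scores = {}
    for ngram, count in sorted_ngram_counts:
        remembered[ngram] = count        # "remember this value"
        prefix = ngram[:-1]
        if prefix and prefix in remembered:
            scores[ngram] = count / remembered[prefix]
    return scores

# Toy usage with the slide's A B C / A B D / A B E example:
counts = [(("A",), 3), (("A", "B"), 3),
          (("A", "B", "C"), 1), (("A", "B", "D"), 1), (("A", "B", "E"), 1)]
print(relative_frequencies(counts))   # e.g. S(C | A B) = 1/3
```

In a distributed setting the same effect would be obtained by partitioning so that an n-gram and its context land on the same reducer, for example by hashing on the leading words of the n-gram.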

  21. Kneser-Ney (KN) and Stupid Backoff (SB): KN fails to train on a 1.8 TB dataset in reasonable time.

  22. Kneser-Ney (KN) and Stupid Backoff (SB): [figure] translation accuracy vs. training-data size. SB outperforms KN once the training set is big enough; KN fails to train on big datasets.

  23. Search! Source: http://www.flickr.com/photos/guvnah/7861418602/

  24. The Central Problem in Search. [Diagram] The searcher's query terms and the author's document terms each express underlying concepts: “tragic love story” vs. “fateful star-crossed romance”. Do these represent the same concepts? Why is IR hard? Because language is hard!

  25. Abstract IR Architecture. [Diagram] Online side: a query passes through a representation function to produce a query representation. Offline side: documents pass through a representation function to produce document representations, which are stored in an index. A comparison function matches the query representation against the index to produce hits.

  26. How do we represent text? Remember: computers don't “understand” anything! “Bag of words”: treat all the words in a document as index terms; assign a “weight” to each term based on “importance” (or, in the simplest case, presence/absence of the word); disregard order, structure, meaning, etc. of the words. Simple, yet effective! Assumptions: term occurrence is independent, document relevance is independent, and “words” are well-defined.
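A minimal sketch of the bag-of-words representation (my illustration; the lowercase letter-only tokenization and raw term frequency as the "weight" are assumptions, not the course's exact choices):

```python
import re
from collections import Counter

def bag_of_words(text):
    """Represent a document as a bag of words: term -> weight (here, raw count).
    Order, structure, and meaning are deliberately discarded."""
    tokens = re.findall(r"[a-z]+", text.lower())   # crude tokenization + case folding
    return Counter(tokens)

doc = "McDonald's slims down spuds. Fast-food chain to reduce certain types of fat."
print(bag_of_words(doc).most_common(5))
```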

  27. What’s a word? 天主教教宗若望保祿二世因感冒再度住進醫院。 這是他今年第二度因同樣的病因住院。 لاقوكرامفيجير - قطانلامساب ةيجراخلاةيليئارسلئا - نإنوراشلبق ةوعدلاموقيسوةرمللىلولؤاةرايزب سنوت،يتلاتناكةرتفلةليوطرقملا يمسرلاةمظنملريرحتلاةينيطسلفلادعباهجورخنمنانبلماع 1982. Выступая в Мещанском суде Москвы экс - глава ЮКОСа заявил не совершал ничего противозаконного, в чем обвиняет его генпрокуратура России. भारत सरकार ने आरॎथिक सर्शेक्सण मेः रॎर्शत्थीय र्शरॎि 2005-06 मेः सात फीसदी रॎर्शकास दर हारॎसल करने का आकलन रॎकया है और कर सुधार पर ज़ौर रॎदया है 日米連合で台頭中国に対処 … アーミテージ前副長官提言 조재영 기자 = 서울시는 25 일 이명박 시장이 ` 행정중심복합도시 '' 건설안 에 대해 ` 군대라도 동원해 막고싶은 심정 '' 이라고 말했다는 일부 언론의 보도를 부인했다 . 27

  28. Sample Document. McDonald's slims down spuds. Fast-food chain to reduce certain types of fat in its french fries with new cooking oil. NEW YORK (CNN/Money) - McDonald's Corp. is cutting the amount of "bad" fat in its french fries nearly in half, the fast-food chain said Tuesday as it moves to make all its fried menu items healthier. But does that mean the popular shoestring fries won't taste the same? The company says no. "It's a win-win for our customers because they are getting the same great french-fry taste along with an even healthier nutrition profile," said Mike Roberts, president of McDonald's USA. But others are not so sure. McDonald's will not specifically discuss the kind of oil it plans to use, but at least one nutrition expert says playing with the nutrition formula could mean a different taste. Shares of Oak Brook, Ill.-based McDonald's (MCD: down $0.54 to $23.22, Research, Estimates) were lower Tuesday afternoon. It was unclear Tuesday whether competitors Burger King and Wendy's International (WEN: down $0.80 to $34.91, Research, Estimates) would follow suit. Neither company could immediately be reached for comment. …
  “Bag of Words” for this document:
  14 × McDonalds
  12 × fat
  11 × fries
  8 × new
  7 × french
  6 × company, said
  5 × food, oil, percent, reduce, taste, Tuesday
  …

  29. Counting Words… [Diagram] Documents are turned into a bag of words via case folding, tokenization, stopword removal, and stemming (throwing away syntax, semantics, word knowledge, etc.), and from the bag of words we build an inverted index.
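A minimal sketch of building an inverted index from a bag-of-words view of each document (my illustration; the tokenizer is assumed, and stopword removal and stemming are omitted):

```python
import re
from collections import defaultdict

def tokenize(text):
    # Case folding plus crude tokenization; stopword removal and stemming omitted.
    return re.findall(r"[a-z]+", text.lower())

def build_inverted_index(docs):
    """Map each term to its postings list of (doc_id, term_frequency) pairs."""
    index = defaultdict(list)
    for doc_id, text in enumerate(docs):
        counts = {}
        for term in tokenize(text):
            counts[term] = counts.get(term, 0) + 1
        for term, tf in counts.items():
            index[term].append((doc_id, tf))
    return index

docs = ["McDonald's slims down spuds",
        "Fast-food chain to reduce fat in its french fries"]
print(build_inverted_index(docs)["fries"])   # [(1, 1)]
```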

  30. Count. Source: http://www.flickr.com/photos/guvnah/7861418602/
