

  1. Advanced Analytics in Business [D0S07a] / Big Data Platforms & Technologies [D0S06a]: Text Mining

  2. Overview: Featurization; Traditional approaches; Word embeddings and representational learning; A look ahead; Tooling

  3. Text is everywhere: medical records, product reviews, repair notes, Facebook posts, book recommendations, tweets, declarations, legislation and court decisions, emails, websites, …

  4. But it is unstructured. Or "semi-structured", rather, though still: there is no direct "feature vector" representation; there is linguistic structure (language, relationships between words, importance of words, negations, etc.); text is dirty (grammatical and spelling errors, abbreviations, homographs); and text is intended for communication between people.

  5. The trick for unstructured data: it all boils down to featurization, i.e. approaches to convert the unstructured data into a structured feature vector ("feature engineering"). Just as for computer vision, we can split up approaches into a "before and after deep learning" era.

  6. Basic Text Featurization

  7. So, how to featurize? The goal is to take a collection of documents, each of which is a relatively free-form sequence of words, and turn it into the familiar feature-vector representation. A collection of documents is a corpus (plural: corpora). A document is composed of individual tokens or terms (words, but these can be broader as well, e.g. punctuation tokens, abbreviations, smileys, …). Mostly each document is one instance, but sentences can also form instances.

  8. Bag of words: the "keep it simple" approach. Simply treat every document (instance) as a collection of individual tokens (words, most often); ignore grammar, word order, sentence structure, and (usually) punctuation; treat every word in a document as a potentially important keyword of the document. Each document is then represented by a sequence of ones (the token is present in the document) and zeros (the token is not present), i.e. one feature per word. This is inexpensive to generate, though it leads to an explosion of features; it can work in some settings with careful filtering. Alternatively, frequencies instead of binary features can be used (a "term frequency matrix"). Example:

                                       this  is  an  a  example  second
      "This is an example"         =>     1   1   1  0        1       0
      "This is a second example"   =>     1   1   0  1        1       1
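A minimal sketch of this binary bag-of-words encoding, using scikit-learn's CountVectorizer (the library choice is an assumption; the slides do not prescribe tooling here). Note that the vectorizer orders the columns alphabetically rather than in the order shown above:

```python
from sklearn.feature_extraction.text import CountVectorizer

docs = ["This is an example", "This is a second example"]

# binary=True gives the 0/1 presence encoding from the slide; drop it to get
# raw counts (a term frequency matrix). The token_pattern keeps one-letter
# words such as "a", which the default pattern would discard.
vectorizer = CountVectorizer(binary=True, token_pattern=r"(?u)\b\w+\b")
X = vectorizer.fit_transform(docs)

print(vectorizer.get_feature_names_out())  # ['a' 'an' 'example' 'is' 'second' 'this']
print(X.toarray())
# [[0 1 1 1 0 1]
#  [1 0 1 1 1 1]]
```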

  9. Normalization, stop-word removal, stemming ("can work in some settings with careful filtering"). The case should be normalized, e.g. every term is put in lowercase. Words should be stemmed (stemming): suffixes are removed and only the root is kept, e.g. noun plurals are transformed to singular forms; Porter's stemming algorithm is basically suffix-stripping. More complex is lemmatization (e.g. better → good, flies/flight → fly); if context is required, part-of-speech (PoS) tagging is needed (see later). Stop words should be removed (stop-word removal): a stop word is a very common word in English (or whatever language is being parsed); typically words such as "the", "and", "of", "on", … are removed.
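A rough sketch of these normalization steps with NLTK (the library choice is an assumption; it requires `pip install nltk` and a one-time `nltk.download("stopwords")`):

```python
from nltk.corpus import stopwords
from nltk.stem import PorterStemmer

stemmer = PorterStemmer()
stop_words = set(stopwords.words("english"))

text = "The flies were flying over the stacked boxes"
tokens = text.lower().split()                        # case normalization + naive tokenization
tokens = [t for t in tokens if t not in stop_words]  # stop-word removal ("the", "were", "over")
stems = [stemmer.stem(t) for t in tokens]            # Porter suffix-stripping

print(stems)  # typically ['fli', 'fli', 'stack', 'box']; a lemmatizer would yield 'fly' instead
```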

  10. (Normalized) term frequency. Recall that we can use bag of words with counts (word frequencies), which is nice as it differentiates between how many times a word is used (a "document-term matrix"). However: documents can be of various lengths; words occur with very different frequencies; and words should not be too common or too rare, so both an upper and a lower limit can be placed on the number (or fraction) of documents in which a word may occur. The raw term frequencies are therefore best normalized in some way, such as by dividing each by the total number of words in the document, or by the frequency of the specific term in the corpus. And as always: think about the production context; what do we do when we encounter a previously unseen term?
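The document-frequency limits and the unseen-term question can be illustrated with scikit-learn's CountVectorizer (again an assumption; the parameter values and toy documents below are arbitrary):

```python
from sklearn.feature_extraction.text import CountVectorizer

train_docs = [
    "the engine makes a rattling noise",
    "the engine overheats on long drives",
    "rattling noise when braking",
]

# max_df / min_df put an upper and lower limit on the fraction (or number) of
# documents in which a term may occur: terms in more than 90% of documents or
# in fewer than 2 documents are dropped here.
vectorizer = CountVectorizer(max_df=0.9, min_df=2)
X_train = vectorizer.fit_transform(train_docs)
print(vectorizer.get_feature_names_out())

# The vocabulary is fixed when fitting on the training corpus, so a previously
# unseen term ("gearbox") is simply ignored at transform time.
X_new = vectorizer.transform(["gearbox makes a rattling noise"])
print(X_new.toarray())
```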

  11. TF-IDF. Term frequency (TF) is calculated per term t per document d:

          TF(t, d) = |{w ∈ d : w = t}| / |d|

      i.e. the number of times the term appears divided by the document length. However, frequency in the corpus also plays a role: terms should not be too rare, but also not too common, so we need a measure of sparseness. Inverse document frequency (IDF) is calculated per term t over the corpus c:

          IDF(t) = 1 + log( |c| / |{d ∈ c : t ∈ d}| )

      i.e. one plus the logarithm of the total number of documents divided by the number of documents containing the term.

  12. TF-IDF (figure)

  13. TF-IDF: TFIDF(t, d) = TF(t, d) × IDF(t). This gives you a weight for each term in each document, perfect for our feature matrix: it rewards terms that occur frequently in the document but penalizes them if they occur frequently in the whole collection. A vector of weights per document is obtained.
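A pure-Python transcription of the TF, IDF and TF-IDF definitions from the two slides above (the toy corpus is made up for illustration):

```python
import math

def tf(term, doc_tokens):
    # number of times the term appears, divided by the document length
    return doc_tokens.count(term) / len(doc_tokens)

def idf(term, corpus):
    # one plus the log of (number of documents / documents containing the term);
    # note this raises ZeroDivisionError for a term never seen in the corpus,
    # which is exactly the production-context caveat from the earlier slide
    n_containing = sum(1 for doc in corpus if term in doc)
    return 1 + math.log(len(corpus) / n_containing)

def tf_idf(term, doc_tokens, corpus):
    return tf(term, doc_tokens) * idf(term, corpus)

corpus = [doc.lower().split() for doc in [
    "famous jazz saxophonist born in kansas",
    "famous rock guitarist born in seattle",
    "jazz pianist from kansas city",
]]

# "saxophonist" occurs in one document, "born" in two: the rarer term gets
# the higher weight even though both occur once in the first document.
print(tf_idf("saxophonist", corpus[0], corpus))  # ~0.35
print(tf_idf("born", corpus[0], corpus))         # ~0.23
```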

  14. TF-IDF example: 15 prominent jazz musicians and excerpts of their biographies from Wikipedia: nearly 2,000 features after stemming and stop-word removal! Consider the sample phrase "Famous jazz saxophonist born in Kansas who played bebop and latin".

  15. Dealing with high dimensionality. Feature selection will often need to be applied, and fast and scalable classification or clustering techniques will need to be used: e.g. Naive Bayes and (linear) Support Vector Machines have proven to work well in this setting, as have clustering techniques based on non-negative matrix factorization. Also recall the "hashing trick" from pre-processing: collapse the large number of features into n hashed features. Dimensionality reduction techniques like t-SNE or UMAP can be used as well.
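A sketch of the hashing trick with scikit-learn's HashingVectorizer (an assumption; any hashing vectorizer works the same way in spirit):

```python
from sklearn.feature_extraction.text import HashingVectorizer

docs = ["the engine makes a rattling noise",
        "famous jazz saxophonist born in kansas"]

# Every token is hashed into one of 2**10 = 1024 buckets, so the feature
# dimension is fixed up front regardless of vocabulary size. The trade-off:
# hash collisions, and no way to map a column back to a word.
vectorizer = HashingVectorizer(n_features=2**10, alternate_sign=False)
X = vectorizer.fit_transform(docs)
print(X.shape)  # (2, 1024)
```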

  16. N-gram sequences: what if word order is important? A next step from the previous techniques is to use sequences of adjacent words as terms; adjacent pairs are commonly called bi-grams. Example: "The quick brown fox jumps" would be transformed into {quick_brown, brown_fox, fox_jumps}, which can be combined with the 1-grams {quick, brown, fox, jumps}. But: n-grams greatly increase the size of the feature set.
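Combining 1-grams and bi-grams, again sketched with scikit-learn (an assumption). Note that it joins bi-grams with a space rather than the underscore used above, and keeps "the" since no stop-word filtering is applied here:

```python
from sklearn.feature_extraction.text import CountVectorizer

vectorizer = CountVectorizer(ngram_range=(1, 2))  # unigrams and bigrams together
X = vectorizer.fit_transform(["The quick brown fox jumps"])

print(vectorizer.get_feature_names_out())
# ['brown' 'brown fox' 'fox' 'fox jumps' 'jumps' 'quick' 'quick brown'
#  'the' 'the quick']
```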

  17. Natural language processing (NLP). Key idea: use machine learning and statistical approaches to learn a language from data. These are better suited to deal with contextual information ("This camera sucks" vs. "This vacuum cleaner really sucks"), negations, and sarcasm. Best-known tasks: PoS (part of speech) tagging (noun, verb, subject, …) and named entity recognition.
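A minimal PoS-tagging and NER sketch with spaCy (the library choice is an assumption; it requires a one-time `python -m spacy download en_core_web_sm`, and the predicted tags and entities depend on the model):

```python
import spacy

nlp = spacy.load("en_core_web_sm")
doc = nlp("This vacuum cleaner from Dyson really sucks, John wrote from London.")

for token in doc:
    print(token.text, token.pos_)   # part-of-speech tag per token (NOUN, VERB, ...)

for ent in doc.ents:
    print(ent.text, ent.label_)     # named entities, e.g. Dyson/ORG, John/PERSON, London/GPE
```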

  18. Part of speech (figure)

  19. Named entity recognition (figure)

  20. Named entity recognition (figure, continued)

  21. Word Embeddings and Representational Learning

  22. Vector space models: represent an item (a document, a word) as a vector of numbers, e.g. banana = [0 1 0 1 0 0 2 0 1 0 1 0]. Such a vector could for instance correspond to the documents in which the word occurs: the non-zero entries point to Doc2, Doc4, Doc7 (count 2), Doc9 and Doc11. Recall that before we had the opposite representation: a vector describing word presence per document.

  23. Vector space models: the vector can also correspond to neighboring word context, e.g. banana = [0 1 0 1 0 0 1 0 1 0 1 0], where the non-zero entries point to the context words (yellow), (on), (grows), (tree), (africa), as collected from text such as "yellow banana grows on trees in africa".
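Such context vectors can be built by counting co-occurrences within a sliding window; a minimal pure-Python sketch (the window size of 2 is an arbitrary choice, so only part of the context marked above is captured):

```python
from collections import defaultdict

sentence = "yellow banana grows on trees in africa".split()
window = 2  # how many neighbors on each side count as context

cooc = defaultdict(lambda: defaultdict(int))
for i, word in enumerate(sentence):
    for j in range(max(0, i - window), min(len(sentence), i + window + 1)):
        if j != i:
            cooc[word][sentence[j]] += 1

print(dict(cooc["banana"]))
# {'yellow': 1, 'grows': 1, 'on': 1}  -> sparse context vector for "banana"
```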

  24. Word embeddings: the goal is to construct a dense vector of real values per word, where the vector dimension is typically much smaller than the number of items (the vocabulary size). You can imagine the vectors as coordinates for items in the "embedding space": for each item (word) we obtain a representation (a vector of real values), and distance metrics can be used to define a notion of relatedness between items in this space. What defines a "good" embedding is a good thinking point: in essence, dimensionality reduction techniques also embed items in a reduced feature space. What is a good embedding in e.g. unsupervised or supervised contexts? For text, structured data, imagery?
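One common relatedness measure in such an embedding space is cosine similarity; a numpy sketch with made-up toy vectors:

```python
import numpy as np

def cosine(u, v):
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

banana = np.array([0.2, 0.9, -0.1])   # toy 3-dimensional "embeddings"
mango  = np.array([0.3, 0.8,  0.0])
brick  = np.array([-0.7, 0.1, 0.6])

print(cosine(banana, mango))  # close to 1: related
print(cosine(banana, brick))  # much lower: unrelated
```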

  25. Word embeddings: man is to woman as king is to ____? Good is to best as smart is to ____? China is to Beijing as Russia is to ____? [king] – [man] + [woman] ≈ [queen]. It turns out that using the idea of context leads to embeddings that can be used for similarity and analogy tasks.
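The analogy above can be reproduced with pretrained vectors, e.g. via gensim's downloader (the model name, the gensim API version, and the exact score are assumptions; the download is several hundred megabytes):

```python
import gensim.downloader as api

vectors = api.load("glove-wiki-gigaword-100")  # pretrained GloVe vectors

# [king] - [man] + [woman] ~= [queen]
print(vectors.most_similar(positive=["king", "woman"], negative=["man"], topn=1))
# typically something like [('queen', 0.77)]
```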

  26. How to construct word embeddings? Matrix factorization based: non-negative matrix factorization, GloVe (word-by-neighboring-word matrix), see https://nlp.stanford.edu/projects/glove/. Neural network based: word2vec, released by Google in 2013 (see https://code.google.com/archive/p/word2vec/), a neural network-based implementation that learns a vector representation per word. Background information: https://blog.acolyer.org/2016/04/21/the-amazing-power-of-word-vectors/ , http://mccormickml.com/2016/04/19/word2vec-tutorial-the-skip-gram-model/ , https://blog.acolyer.org/2016/04/22/glove-global-vectors-for-word-representation/ , https://iksinc.wordpress.com/tag/continuous-bag-of-words-cbow/ , https://multithreaded.stitchfix.com/blog/2017/10/18/stop-using-word2vec/ , https://blog.insightdatascience.com/how-to-solve-90-of-nlp-problems-a-step-by-step-guide-fda605278e4e

  27. word2vec: convert each term to a vector representation. It works at the term level, not at the document level (by default). Such a vector comes to represent, in some abstract way, the "meaning" of a word. It is possible to learn word vectors that capture the relationships between words in a surprisingly expressive way: "man is to woman as uncle is to (?)".
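Training word2vec on a toy corpus with gensim (a gensim >= 4 API sketch; real corpora need far more text for the vectors to be meaningful):

```python
from gensim.models import Word2Vec

sentences = [
    ["famous", "jazz", "saxophonist", "born", "in", "kansas"],
    ["famous", "jazz", "pianist", "born", "in", "chicago"],
    ["rock", "guitarist", "born", "in", "seattle"],
]

# sg=1 selects the skip-gram architecture; min_count=1 keeps every term of
# this tiny corpus in the vocabulary.
model = Word2Vec(sentences, vector_size=50, window=2, min_count=1, sg=1, epochs=50)

print(model.wv["jazz"].shape)                # (50,): one dense vector per term
print(model.wv.most_similar("jazz", topn=3)) # nearest terms (noisy on a toy corpus)
```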

  28. word2vec: here, the word vectors have a dimension of 1000, so it is common to apply a second step such as PCA or t-SNE to project them to two dimensions.
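A sketch of that second projection step with scikit-learn's PCA (an assumption; `sklearn.manifold.TSNE` is used the same way), on random stand-in vectors:

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
word_vectors = rng.normal(size=(500, 1000))   # stand-in for 500 words x 1000 dimensions

coords_2d = PCA(n_components=2).fit_transform(word_vectors)
print(coords_2d.shape)  # (500, 2): one (x, y) point per word, ready to scatter-plot
```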
