  1. INFO 4300 / CS4300 Information Retrieval, slides adapted from Hinrich Schütze's, linked from http://informationretrieval.org/. IR 23/25: Hierarchical Clustering & Text Classification Redux. Paul Ginsparg, Cornell University, Ithaca, NY. 22 Nov 2011

  2. Administrativa: Assignment 4 due Fri 2 Dec (extended to Sun 4 Dec). (Note added part 0: non-programming questions, practice for the final exam.) Discussion 5 (Tues 28 Nov): Peter Norvig, "How to Write a Spelling Corrector", http://norvig.com/spell-correct.html. Recall also the relevant sections from Peter Norvig, "The Unreasonable Effectiveness of Data" (YouTube), given 23 Sep 2010: http://www.youtube.com/watch?v=yvDCzhbjYWs (assigned for 25 Oct).

  3. Overview: 1. Recap; 2. Centroid/GAAC; 3. Variants; 4. Feature selection; 5. Text classification; 6. Naive Bayes

  4. Outline: 1. Recap; 2. Centroid/GAAC; 3. Variants; 4. Feature selection; 5. Text classification; 6. Naive Bayes

  5. Hierarchical agglomerative clustering (HAC): HAC creates a hierarchy in the form of a binary tree. It assumes a similarity measure for determining the similarity of two clusters. Up to now, our similarity measures were for documents. We will look at four different cluster similarity measures (a sketch of the agglomerative loop follows below).
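
To make the procedure concrete, here is a minimal sketch of the naive agglomerative loop; this is not the lecture's code, and the names hac and cluster_sim are mine. It starts from singleton clusters and repeatedly merges the most similar pair, recording the merge sequence a dendrogram would display:

    import numpy as np

    def hac(vectors, cluster_sim):
        # Naive agglomerative loop: start with singleton clusters and
        # repeatedly merge the two most similar ones (O(N^3) overall).
        clusters = {i: [np.asarray(v, dtype=float)] for i, v in enumerate(vectors)}
        merges, next_id = [], len(vectors)
        while len(clusters) > 1:
            pairs = [(i, j) for i in clusters for j in clusters if i < j]
            i, j = max(pairs, key=lambda p: cluster_sim(clusters[p[0]], clusters[p[1]]))
            merges.append((i, j))  # record the merge for the dendrogram
            clusters[next_id] = clusters.pop(i) + clusters.pop(j)
            next_id += 1
        return merges  # leaves are 0 .. len(vectors)-1; merged nodes get new ids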

  6. Key question: how to define cluster similarity. Single-link: maximum similarity, i.e. the maximum similarity of any two documents. Complete-link: minimum similarity, i.e. the minimum similarity of any two documents. Centroid: average "intersimilarity", the average similarity of all document pairs but excluding pairs of docs in the same cluster; this is equivalent to the similarity of the centroids. Group-average: average "intrasimilarity", the average similarity of all document pairs, including pairs of docs in the same cluster.
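
Written out naively over dot-product similarities, the four criteria look as follows (a sketch under the definitions above; the function names are mine, and each cluster is a list of document vectors):

    import numpy as np

    def single_link(A, B):
        # Maximum similarity of any two documents across the clusters.
        return max(a @ b for a in A for b in B)

    def complete_link(A, B):
        # Minimum similarity of any two documents across the clusters.
        return min(a @ b for a in A for b in B)

    def centroid_sim(A, B):
        # Dot product of the centroids; equals the average similarity
        # over all cross-cluster document pairs.
        return np.mean(A, axis=0) @ np.mean(B, axis=0)

    def group_average(A, B):
        # Average similarity over all pairs in the merged cluster,
        # self-pairs excluded.
        M = list(A) + list(B)
        n = len(M)
        return sum(M[i] @ M[j] for i in range(n)
                   for j in range(n) if i != j) / (n * (n - 1))

Any of these can serve as the cluster_sim argument of the loop sketched earlier.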

  7. Single-link: Maximum similarity. [Figure: an example point set illustrating single-link (maximum) similarity.]

  8. Complete-link: Minimum similarity. [Figure: the same point set, illustrating complete-link (minimum) similarity.]

  9. Centroid: Average intersimilarity. [Figure: the same point set, illustrating centroid similarity.]

  10. Group average: Average intrasimilarity. [Figure: the same point set, illustrating group-average similarity.]

  11. Complete-link dendrogram. [Figure: complete-link dendrogram of news headlines such as "NYSE closing averages", "Fed holds interest rates steady", and "War hero Colin Powell".] Notice that this dendrogram is much more balanced than the single-link one. We can create a 2-cluster clustering with two clusters of about the same size.

  12. Exercise: Compute single and complete link clusterings. [Figure: eight points, d1–d4 in a top row and d5–d8 in a bottom row of a grid.]

  13. Single-link clustering. [Figure: the single-link clustering of the eight exercise points.]

  14. Complete link clustering. [Figure: the complete-link clustering of the eight exercise points.]

  15. Single-link vs. Complete link clustering. [Figure: the two clusterings shown side by side.]

  16. Single-link: Chaining. [Figure: a chain of points extending across the plane.] Single-link clustering often produces long, straggly clusters. For most applications, these are undesirable.

  17. What 2-cluster clustering will complete-link produce? [Figure: five points d1–d5 on a line.] Coordinates: 1 + 2ε, 4, 5 + 2ε, 6, 7 − ε, so that distance(d2, d1) = 3 − 2ε is less than distance(d2, d5) = 3 − ε, and d2 joins d1 rather than d3, d4, d5.

  18. What 2-cluster clustering will complete-link produce? [Figure: the same five points; by the distance comparison on the previous slide, complete-link produces the two clusters {d1, d2} and {d3, d4, d5}.]
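
A quick numeric check of that distance argument (ε = 0.01 is an arbitrary small value, not taken from the slide):

    eps = 0.01
    x = {'d1': 1 + 2*eps, 'd2': 4, 'd3': 5 + 2*eps, 'd4': 6, 'd5': 7 - eps}

    # Complete-link distance to a cluster is the distance to its *farthest* member.
    dist_left = abs(x['d2'] - x['d1'])                                 # 3 - 2*eps
    dist_right = max(abs(x['d2'] - x[d]) for d in ('d3', 'd4', 'd5'))  # 3 - eps
    print(dist_left < dist_right)  # True: d2 merges with d1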

  19. Complete-link: Sensitivity to outliers. [Figure: the same five points d1–d5.] The complete-link clustering of this set splits d2 from its right neighbors – clearly undesirable. The reason is the outlier d1. This shows that a single outlier can negatively affect the outcome of complete-link clustering. Single-link clustering does better in this case.

  20. Outline: 1. Recap; 2. Centroid/GAAC; 3. Variants; 4. Feature selection; 5. Text classification; 6. Naive Bayes

  21. Centroid HAC: The similarity of two clusters is the average intersimilarity, the average similarity of documents from the first cluster with documents from the second cluster. A naive implementation of this definition is inefficient (O(N²)), but the definition is equivalent to computing the similarity of the centroids:

    \text{sim-cent}(\omega_i, \omega_j) = \vec{\mu}(\omega_i) \cdot \vec{\mu}(\omega_j)
      = \Bigl( \frac{1}{N_i} \sum_{d_m \in \omega_i} \vec{d}_m \Bigr) \cdot \Bigl( \frac{1}{N_j} \sum_{d_n \in \omega_j} \vec{d}_n \Bigr)
      = \frac{1}{N_i N_j} \sum_{d_m \in \omega_i} \sum_{d_n \in \omega_j} \vec{d}_m \cdot \vec{d}_n

Hence the name: centroid HAC. Note: this is the dot product, not cosine similarity!
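
The equivalence is easy to spot-check numerically (a sketch with made-up random "documents"; all names are mine):

    import numpy as np

    rng = np.random.default_rng(0)
    A = rng.normal(size=(4, 3))  # cluster omega_i: 4 docs, 3 dimensions
    B = rng.normal(size=(6, 3))  # cluster omega_j: 6 docs

    avg_pairs = np.mean([a @ b for a in A for b in B])  # naive O(N_i N_j) average
    via_centroids = A.mean(axis=0) @ B.mean(axis=0)     # cheap centroid form
    assert np.isclose(avg_pairs, via_centroids)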

  22. Exercise: Compute centroid clustering. [Figure: six points d1–d6 on a grid.]

  23. Centroid clustering. [Figure: the centroid clustering of the six points, with centroids µ1, µ2, µ3 marked.]

  24. Inversion in centroid clustering: In an inversion, the similarity increases during a merge sequence, resulting in an "inverted" dendrogram. Below: d1 = (1 + ε, 1), d2 = (5, 1), d3 = (3, 1 + 2√3). The similarity of the first merge (d1 ∪ d2) is −4.0; the similarity of the second merge ((d1 ∪ d2) ∪ d3) is ≈ −3.5. [Figure: the three points and the resulting inverted dendrogram, with merge heights −4 and ≈ −3.5.]
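
The slide's numbers can be reproduced by taking similarity to be the negative Euclidean distance between centroids (that convention is my reading of the example; ε = 0.01 is chosen arbitrarily):

    import numpy as np

    eps = 0.01
    d1, d2, d3 = (np.array(p) for p in
                  [(1 + eps, 1.0), (5.0, 1.0), (3.0, 1 + 2 * np.sqrt(3))])

    def sim(X, Y):
        # Similarity = negative distance between the two clusters' centroids.
        return -np.linalg.norm(np.mean(X, axis=0) - np.mean(Y, axis=0))

    first = sim([d1], [d2])        # ≈ -3.99: the merge d1 ∪ d2
    second = sim([d1, d2], [d3])   # ≈ -3.46: the merge (d1 ∪ d2) ∪ d3
    print(first < second)          # True: similarity increased, an inversion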

  25. Inversions: Hierarchical clustering algorithms that allow inversions are inferior. The rationale for hierarchical clustering is that at any given point, we've found the most coherent clustering of a given size. Intuitively, smaller clusterings should be more coherent than larger clusterings. An inversion contradicts this intuition: we have a large cluster that is more coherent than one of its subclusters.

  26. Group-average agglomerative clustering (GAAC): GAAC also has an "average-similarity" criterion, but does not have inversions. The idea is that the next merged cluster ω_k = ω_i ∪ ω_j should be coherent: look at all doc–doc similarities within ω_k, including those within ω_i and within ω_j. The similarity of two clusters is the average intrasimilarity, the average similarity of all document pairs (including those from the same cluster). But we exclude self-similarities.

  27. Group-average agglomerative clustering (GAAC): Again, a naive implementation is inefficient (O(N²)), and there is an equivalent, more efficient, centroid-based definition:

    \text{sim-ga}(\omega_i, \omega_j)
      = \frac{1}{(N_i + N_j)(N_i + N_j - 1)} \sum_{d_m \in \omega_i \cup \omega_j} \; \sum_{\substack{d_n \in \omega_i \cup \omega_j \\ d_n \neq d_m}} \vec{d}_m \cdot \vec{d}_n
      = \frac{1}{(N_i + N_j)(N_i + N_j - 1)} \Bigl[ \Bigl( \sum_{d_m \in \omega_i \cup \omega_j} \vec{d}_m \Bigr)^2 - (N_i + N_j) \Bigr]

The −(N_i + N_j) term subtracts the self-similarities, each of which equals 1 for length-normalized document vectors. Again, this is the dot product, not cosine similarity.
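
The second equality can be verified numerically; it holds because each self-similarity is 1 for unit-length vectors (a sketch with random unit vectors; names are mine):

    import numpy as np

    rng = np.random.default_rng(1)
    M = rng.normal(size=(7, 4))
    M /= np.linalg.norm(M, axis=1, keepdims=True)  # unit-length "documents"
    N = len(M)                                     # treat all 7 as omega_i ∪ omega_j

    naive = sum(M[i] @ M[j] for i in range(N)
                for j in range(N) if i != j) / (N * (N - 1))

    s = M.sum(axis=0)                   # sum of the document vectors
    fast = (s @ s - N) / (N * (N - 1))  # (sum)^2 minus the N self-similarities
    assert np.isclose(naive, fast)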

  28. Which HAC clustering should I use? Don't use centroid HAC, because of inversions. In most cases GAAC is best, since it isn't subject to chaining or to sensitivity to outliers. However, we can only use GAAC for vector representations. For other types of document representation (or if only pairwise document similarities are available), use complete-link. There are also some applications for single-link (e.g., duplicate detection in web search).

  29. Flat or hierarchical clustering? For high efficiency, use flat clustering (or perhaps bisecting k-means). For deterministic results: HAC. When a hierarchical structure is desired: a hierarchical algorithm. HAC can also be applied if K cannot be predetermined (it can start without knowing K).

  30. Outline: 1. Recap; 2. Centroid/GAAC; 3. Variants; 4. Feature selection; 5. Text classification; 6. Naive Bayes
