

  1. INFO 4300 / CS4300 Information Retrieval, slides adapted from Hinrich Schütze's, linked from http://informationretrieval.org/ IR 21/26: Linear Classifiers and Flat clustering. Paul Ginsparg, Cornell University, Ithaca, NY, 12 Nov 2009

  2. Overview: 1. Recap; 2. Evaluation; 3. How many clusters?; 4. Discussion

  3. Outline: 1. Recap; 2. Evaluation; 3. How many clusters?; 4. Discussion

  4. Linear classifiers. Linear classifiers compute a linear combination or weighted sum $\sum_i w_i x_i$ of the feature values. Classification decision: $\sum_i w_i x_i > \theta$?, where $\theta$ (the threshold) is a parameter. (First, we only consider binary classifiers.) Geometrically, this corresponds to a line (2D), a plane (3D) or a hyperplane (higher dimensionalities). Assumption: the classes are linearly separable. Can find hyperplane (= separator) based on training set. Methods for finding the separator: Perceptron, Rocchio, Naive Bayes, as we will explain on the next slides.
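To make the decision rule concrete, here is a minimal sketch (not part of the original slides), assuming the document is represented as a dense NumPy feature vector x, with a weight vector w and threshold theta chosen purely for illustration:

```python
import numpy as np

def linear_classify(x, w, theta):
    """Binary linear classifier: assign to the class iff sum_i w_i * x_i > theta."""
    return float(np.dot(w, x)) > theta

# Hypothetical weights, features and threshold, purely for illustration.
w = np.array([0.5, -1.0, 2.0])
x = np.array([1.0, 0.2, 0.4])
print(linear_classify(x, w, theta=0.8))  # 0.5 - 0.2 + 0.8 = 1.1 > 0.8 -> True
```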

  5. Which hyperplane?

  6. Which hyperplane? For linearly separable training sets there are infinitely many separating hyperplanes. They all separate the training set perfectly, but they behave differently on test data. Error rates on new data are low for some, high for others. How do we find a low-error separator? Perceptron: generally bad; Naive Bayes, Rocchio: ok; linear SVM: good.

  7. Linear classifiers: Discussion. Many common text classifiers are linear classifiers: Naive Bayes, Rocchio, logistic regression, linear support vector machines, etc. Each method has a different way of selecting the separating hyperplane, and there are huge differences in performance on test documents. Can we get better performance with more powerful nonlinear classifiers? Not in general: a given amount of training data may suffice for estimating a linear boundary, but not for estimating a more complex nonlinear boundary.

  8. How to combine hyperplanes for > 2 classes? (e.g., rank and select the top-ranked classes; a sketch follows below)
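One possible sketch of the "rank and select top-ranked classes" idea, assuming one weight vector and threshold per class (a one-vs-rest setup; the names W, thetas and the margin-based ranking are illustrative choices, not prescribed by the slides):

```python
import numpy as np

def classify_multiclass(x, W, thetas):
    """Score each class with its own binary linear classifier and return the top-ranked class.
    W: (num_classes, num_features) weight matrix; thetas: per-class thresholds."""
    scores = W @ x - thetas        # margin of each binary classifier
    return int(np.argmax(scores))  # index of the highest-scoring class
```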

  9. What is clustering? (Document) clustering is the process of grouping a set of documents into clusters of similar documents. Documents within a cluster should be similar. Documents from different clusters should be dissimilar. Clustering is the most common form of unsupervised learning. Unsupervised = there are no labeled or annotated data.

  10. Classification vs. Clustering. Classification: supervised learning. Clustering: unsupervised learning. Classification: classes are human-defined and part of the input to the learning algorithm. Clustering: clusters are inferred from the data without human input. However, there are many ways of influencing the outcome of clustering: number of clusters, similarity measure, representation of documents, ...

  11. Flat vs. Hierarchical clustering. Flat algorithms: usually start with a random (partial) partitioning of docs into groups and refine iteratively; main algorithm: K-means. Hierarchical algorithms: create a hierarchy, either bottom-up (agglomerative) or top-down (divisive).

  12. Flat algorithms. Flat algorithms compute a partition of N documents into a set of K clusters. Given: a set of documents and the number K. Find: a partition into K clusters that optimizes the chosen partitioning criterion. Global optimization (exhaustively enumerate all partitions, pick the optimal one) is not tractable. Effective heuristic method: the K-means algorithm.

  13. Set of points to be clustered [figure: unlabeled points in the plane]

  14. Set of points to be clustered [figure: the same unlabeled points in the plane]

  15. K-means. Each cluster in K-means is defined by a centroid. Objective/partitioning criterion: minimize the average squared difference from the centroid. Recall the definition of the centroid: $\vec\mu(\omega) = \frac{1}{|\omega|} \sum_{\vec x \in \omega} \vec x$, where we use $\omega$ to denote a cluster. We try to find the minimum average squared difference by iterating two steps: reassignment: assign each vector to its closest centroid; recomputation: recompute each centroid as the average of the vectors that were assigned to it in reassignment.

  16. Random selection of initial cluster centers [figure: the same points, with two randomly selected seed centroids marked ×]. Centroids after convergence?

  17. Centroids and assignments after convergence [figure: each point labeled with its cluster (1 or 2), with the two converged centroids marked ×]

  18. k-means clustering. Goal: cluster similar data points. Approach: given data points and a distance function $d$, select $k$ centroids $\vec\mu_a$, assign each $\vec x_i$ to its closest centroid $\vec\mu_a$, and minimize $\sum_{a,i} d(\vec x_i, \vec\mu_a)$. Algorithm: (1) randomly pick centroids, possibly from the data points; (2) assign points to the closest centroid; (3) average the assigned points to obtain new centroids; (4) repeat 2 and 3 until nothing changes. Issues: - takes superpolynomial time on some inputs; - not guaranteed to find an optimal solution; + converges quickly in practice.
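A minimal Python sketch of the algorithm above (illustrative, not the lecture's reference code), assuming the points are rows of a NumPy array and d is Euclidean distance:

```python
import numpy as np

def kmeans(X, k, max_iter=100, seed=0):
    """Plain k-means: X is an (n, d) array of points, k the number of clusters."""
    rng = np.random.default_rng(seed)
    centroids = X[rng.choice(len(X), size=k, replace=False)]  # step 1: seeds picked from the data
    for _ in range(max_iter):
        # Step 2 (reassignment): each point goes to its closest centroid.
        dists = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
        assign = dists.argmin(axis=1)
        # Step 3 (recomputation): each centroid becomes the mean of its assigned points.
        new_centroids = np.array([X[assign == a].mean(axis=0) if np.any(assign == a)
                                  else centroids[a] for a in range(k)])
        if np.allclose(new_centroids, centroids):  # nothing changed -> converged
            break
        centroids = new_centroids
    return centroids, assign
```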

  19. Outline: 1. Recap; 2. Evaluation; 3. How many clusters?; 4. Discussion

  20. What is a good clustering? Internal criteria: an example of an internal criterion is RSS in K-means. But an internal criterion often does not evaluate the actual utility of a clustering in the application. Alternative: external criteria, i.e., evaluate with respect to a human-defined classification.

  21. External criteria for clustering quality. Based on a gold standard data set, e.g., the Reuters collection we also used for the evaluation of classification. Goal: the clustering should reproduce the classes in the gold standard. (But we only want to reproduce how documents are divided into groups, not the class labels.) First measure for how well we were able to reproduce the classes: purity.

  22. External criterion: Purity. $\text{purity}(\Omega, C) = \frac{1}{N} \sum_k \max_j |\omega_k \cap c_j|$, where $\Omega = \{\omega_1, \omega_2, \ldots, \omega_K\}$ is the set of clusters and $C = \{c_1, c_2, \ldots, c_J\}$ is the set of classes. For each cluster $\omega_k$: find the class $c_j$ with the most members $n_{kj}$ in $\omega_k$. Sum these maxima over all clusters and divide by the total number of points.

  23. Example for computing purity. Three clusters: cluster 1 contains x x x x x o, cluster 2 contains x o o o o ⋄, cluster 3 contains x x ⋄ ⋄ ⋄ (17 points in total). To compute purity: $5 = \max_j |\omega_1 \cap c_j|$ (class x, cluster 1); $4 = \max_j |\omega_2 \cap c_j|$ (class o, cluster 2); and $3 = \max_j |\omega_3 \cap c_j|$ (class ⋄, cluster 3). Purity is $(1/17) \times (5 + 4 + 3) = 12/17 \approx 0.71$.
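The purity calculation above can be reproduced with a short sketch (illustrative; the label "d" stands in for ⋄, and the list encoding of the example is an assumption):

```python
from collections import Counter

def purity(clusters, classes):
    """clusters, classes: parallel lists with the cluster id and gold class label of each point."""
    total = 0
    for k in set(clusters):
        members = [c for cl, c in zip(clusters, classes) if cl == k]
        total += Counter(members).most_common(1)[0][1]  # size of the majority class in cluster k
    return total / len(classes)

# The o/⋄/x example: cluster 1 = 5 x + 1 o, cluster 2 = 1 x + 4 o + 1 ⋄, cluster 3 = 2 x + 3 ⋄.
clusters = [1] * 6 + [2] * 6 + [3] * 5
classes = list("xxxxxo") + list("xooood") + list("xxddd")
print(purity(clusters, classes))  # (5 + 4 + 3) / 17 ≈ 0.71
```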

  24. Rand index. Definition: $RI = \frac{TP + TN}{TP + FP + FN + TN}$. Based on a 2x2 contingency table of all pairs of documents: same class & same cluster = true positives (TP); same class & different clusters = false negatives (FN); different classes & same cluster = false positives (FP); different classes & different clusters = true negatives (TN). TP + FN + FP + TN is the total number of pairs; there are $\binom{N}{2}$ pairs for $N$ documents, e.g. $\binom{17}{2} = 136$ in the o/⋄/x example. Each pair is either positive or negative (the clustering puts the two documents in the same or in different clusters), and either "true" (correct) or "false" (incorrect): the clustering decision is correct or incorrect.

  25. As an example, we compute RI for the o/⋄/x example. We first compute TP + FP. The three clusters contain 6, 6, and 5 points, respectively, so the total number of "positives", i.e. pairs of documents that are in the same cluster, is $TP + FP = \binom{6}{2} + \binom{6}{2} + \binom{5}{2} = 40$. Of these, the x pairs in cluster 1, the o pairs in cluster 2, the ⋄ pairs in cluster 3, and the x pair in cluster 3 are true positives: $TP = \binom{5}{2} + \binom{4}{2} + \binom{3}{2} + \binom{2}{2} = 20$. Thus, FP = 40 − 20 = 20. FN and TN are computed similarly.

  26. Rand measure for the o/⋄/x example: same class & same cluster: TP = 20; same class & different clusters: FN = 24; different classes & same cluster: FP = 20; different classes & different clusters: TN = 72. RI is then $(20 + 72)/(20 + 20 + 24 + 72) = 92/136 \approx 0.68$.
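For completeness, a brute-force sketch of the Rand index over all pairs of points (again illustrative; it reuses the same encoding of the o/⋄/x example and reproduces RI ≈ 0.68):

```python
from itertools import combinations

def rand_index(clusters, classes):
    """Rand index: (TP + TN) / (TP + FP + FN + TN) over all pairs of points."""
    tp = fp = fn = tn = 0
    for i, j in combinations(range(len(clusters)), 2):
        same_cluster = clusters[i] == clusters[j]
        same_class = classes[i] == classes[j]
        if same_cluster and same_class:
            tp += 1
        elif same_cluster:
            fp += 1
        elif same_class:
            fn += 1
        else:
            tn += 1
    return (tp + tn) / (tp + fp + fn + tn)

clusters = [1] * 6 + [2] * 6 + [3] * 5
classes = list("xxxxxo") + list("xooood") + list("xxddd")
print(rand_index(clusters, classes))  # (20 + 72) / 136 ≈ 0.68
```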

  27. Two other external evaluation measures. Normalized mutual information (NMI): how much information does the clustering contain about the classification? Singleton clusters (number of clusters = number of docs) have maximum MI; therefore, normalize by the entropy of clusters and classes. F measure: like Rand, but "precision" and "recall" can be weighted.
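NMI can be sketched as follows, assuming the common normalization by the average of the cluster and class entropies (one of several normalizations in use, and an assumption here rather than something the slide specifies); on the o/⋄/x example it comes out at roughly 0.36:

```python
import math
from collections import Counter

def nmi(clusters, classes):
    """Normalized mutual information: I(Omega; C) / ((H(Omega) + H(C)) / 2)."""
    n = len(clusters)
    cluster_sizes = Counter(clusters)
    class_sizes = Counter(classes)
    joint = Counter(zip(clusters, classes))  # co-occurrence counts n_kj
    mi = sum((v / n) * math.log(n * v / (cluster_sizes[k] * class_sizes[j]))
             for (k, j), v in joint.items())
    h_clusters = -sum((v / n) * math.log(v / n) for v in cluster_sizes.values())
    h_classes = -sum((v / n) * math.log(v / n) for v in class_sizes.values())
    return mi / ((h_clusters + h_classes) / 2)

clusters = [1] * 6 + [2] * 6 + [3] * 5
classes = list("xxxxxo") + list("xooood") + list("xxddd")
print(nmi(clusters, classes))  # ≈ 0.36
```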

  28. Evaluation results for the o/⋄/x example:
      measure             purity   NMI    RI     F_5
      lower bound         0.0      0.0    0.0    0.0
      maximum             1.0      1.0    1.0    1.0
      value for example   0.71     0.36   0.68   0.46
  All four measures range from 0 (really bad clustering) to 1 (perfect clustering).

  29. Outline: 1. Recap; 2. Evaluation; 3. How many clusters?; 4. Discussion

  30. How many clusters? Either: the number of clusters K is given; then partition into K clusters. K might be given because there is some external constraint. Example: in the case of Scatter-Gather, it was hard to show more than 10–20 clusters on a monitor in the 90s. Or: finding the "right" number of clusters is part of the problem. Given the docs, find the K for which an optimum is reached. How to define "optimum"? We can't use RSS or the average squared distance from the centroid as the criterion: it always chooses K = N clusters, since RSS drops to zero when every document is its own cluster.
