slide-1
SLIDE 1

INFO 4300 / CS4300 Information Retrieval slides adapted from Hinrich Schütze's, linked from http://informationretrieval.org/

IR 21/26: Linear Classifiers and Flat clustering

Paul Ginsparg

Cornell University, Ithaca, NY

12 Nov 2009

1 / 36

slide-2
SLIDE 2

Overview

1. Recap
2. Evaluation
3. How many clusters?
4. Discussion

2 / 36

slide-3
SLIDE 3

Outline

1. Recap
2. Evaluation
3. How many clusters?
4. Discussion

3 / 36

slide-4
SLIDE 4

Linear classifiers

Linear classifiers compute a linear combination or weighted sum Σ_i w_i x_i of the feature values.

Classification decision: Σ_i w_i x_i > θ?

. . . where θ (the threshold) is a parameter. (First, we only consider binary classifiers.) Geometrically, this corresponds to a line (2D), a plane (3D), or a hyperplane (higher dimensionalities). Assumption: the classes are linearly separable. We can then find a hyperplane (= separator) based on the training set. Methods for finding a separator: Perceptron, Rocchio, Naive Bayes – as we will explain on the next slides.

4 / 36
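The decision rule on this slide translates directly into code. A minimal sketch of the binary decision Σ_i w_i x_i > θ (the weights, feature values, and threshold below are hypothetical, not from the slides):

```python
def classify(x, w, theta):
    """Binary linear classifier: predict positive iff sum_i w_i * x_i > theta."""
    return sum(wi * xi for wi, xi in zip(w, x)) > theta

# hypothetical 2-feature document x, weights w, threshold theta
classify([1.0, 1.0], [0.5, 0.5], 0.9)  # 0.5 + 0.5 = 1.0 > 0.9, so positive
```

Training methods (Perceptron, Rocchio, Naive Bayes, SVM) differ only in how they choose w and θ; the decision rule itself is the same for all of them.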

slide-5
SLIDE 5

Which hyperplane?

5 / 36

slide-6
SLIDE 6

Which hyperplane?

For linearly separable training sets there are infinitely many separating hyperplanes. They all separate the training set perfectly, but they behave differently on test data: error rates on new data are low for some, high for others. How do we find a low-error separator? Perceptron: generally bad; Naive Bayes, Rocchio: OK; linear SVM: good.

6 / 36

slide-7
SLIDE 7

Linear classifiers: Discussion

Many common text classifiers are linear classifiers: Naive Bayes, Rocchio, logistic regression, linear support vector machines, etc. Each method has a different way of selecting the separating hyperplane.

Huge differences in performance on test documents

Can we get better performance with more powerful nonlinear classifiers? Not in general: A given amount of training data may suffice for estimating a linear boundary, but not for estimating a more complex nonlinear boundary.

7 / 36

slide-8
SLIDE 8

How to combine hyperplanes for > 2 classes?

?

(e.g.: rank and select top-ranked classes)

8 / 36
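One simple way to realize "rank and select top-ranked classes" is one-vs-rest: train one binary linear classifier per class, score a document against each, and pick the class with the highest score. A minimal sketch (the class names, weights, and thresholds below are hypothetical):

```python
def top_class(x, models):
    """One-vs-rest: score x against each class's (weights, threshold) linear
    model, rank the classes by score, and return the top-ranked one."""
    scores = {c: sum(wi * xi for wi, xi in zip(w, x)) - theta
              for c, (w, theta) in models.items()}
    return max(scores, key=scores.get)

# hypothetical per-class linear models: class -> (weight vector, threshold)
models = {"sports": ([1.0, 0.0], 0.0), "politics": ([0.0, 1.0], 0.0)}
```

For a ranking rather than a single label, one would sort `scores` instead of taking the maximum.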

slide-9
SLIDE 9

What is clustering?

(Document) clustering is the process of grouping a set of documents into clusters of similar documents. Documents within a cluster should be similar. Documents from different clusters should be dissimilar. Clustering is the most common form of unsupervised learning. Unsupervised = there are no labeled or annotated data.

9 / 36

slide-10
SLIDE 10

Classification vs. Clustering

Classification: supervised learning. Clustering: unsupervised learning. Classification: classes are human-defined and part of the input to the learning algorithm. Clustering: clusters are inferred from the data without human input.

However, there are many ways of influencing the outcome of clustering: number of clusters, similarity measure, representation of documents, . . .

10 / 36

slide-11
SLIDE 11

Flat vs. Hierarchical clustering

Flat algorithms
  • Usually start with a random (partial) partitioning of docs into groups
  • Refine iteratively
  • Main algorithm: K-means

Hierarchical algorithms
  • Create a hierarchy
  • Bottom-up, agglomerative
  • Top-down, divisive

11 / 36

slide-12
SLIDE 12

Flat algorithms

Flat algorithms compute a partition of N documents into a set of K clusters.

Given: a set of documents and the number K. Find: a partition into K clusters that optimizes the chosen partitioning criterion.

Global optimization: exhaustively enumerate all partitions and pick the optimal one. Not tractable.

Effective heuristic method: the K-means algorithm.

12 / 36

slide-13
SLIDE 13

Set of points to be clustered

[Figure: 20 unlabeled points in the plane, to be clustered]

13 / 36

slide-14
SLIDE 14

Set of points to be clustered

[Figure: the same 20 unlabeled points, to be clustered]

14 / 36

slide-15
SLIDE 15

K-means

Each cluster in K-means is defined by a centroid. Objective/partitioning criterion: minimize the average squared difference from the centroid. Recall the definition of the centroid:

μ(ω) = (1/|ω|) Σ_{x∈ω} x

where we use ω to denote a cluster. We try to find the minimum average squared difference by iterating two steps:

reassignment: assign each vector to its closest centroid
recomputation: recompute each centroid as the average of the vectors that were assigned to it in reassignment

15 / 36

slide-16
SLIDE 16

Random selection of initial cluster centers

[Figure: the 20 points with two randomly selected initial centroids marked ×]

Centroids after convergence?

16 / 36

slide-17
SLIDE 17

Centroids and assignments after convergence

[Figure: the converged centroids (×), with each point labeled 1 or 2 by its final cluster assignment]

17 / 36

slide-18
SLIDE 18

k-means clustering

Goal: cluster similar data points.

Approach: given data points and a distance function,
  • select k centroids μ_a
  • assign each x_i to its closest centroid μ_a
  • minimize Σ_{a,i} d(x_i, μ_a)

Algorithm:
1. randomly pick centroids, possibly from data points
2. assign points to closest centroid
3. average assigned points to obtain new centroids
4. repeat 2, 3 until nothing changes

Issues:
  − takes superpolynomial time on some inputs
  − not guaranteed to find optimal solution
  + converges quickly in practice

18 / 36
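The four algorithm steps above translate almost line for line into code. A minimal sketch of the K-means iteration with squared Euclidean distance (function and variable names are my own, not from the slides):

```python
import random

def dist2(p, q):
    """Squared Euclidean distance between two points."""
    return sum((a - b) ** 2 for a, b in zip(p, q))

def mean(cluster):
    """Componentwise average of a non-empty list of points."""
    n = len(cluster)
    return tuple(sum(coord) / n for coord in zip(*cluster))

def kmeans(points, k, iters=100, seed=0):
    """Alternate reassignment and recomputation until nothing changes."""
    rng = random.Random(seed)
    centroids = rng.sample(points, k)  # step 1: pick centroids from the data
    for _ in range(iters):
        # step 2 (reassignment): attach each point to its closest centroid
        clusters = [[] for _ in range(k)]
        for p in points:
            a = min(range(k), key=lambda j: dist2(p, centroids[j]))
            clusters[a].append(p)
        # step 3 (recomputation): each centroid becomes the mean of its points
        new = [mean(c) if c else centroids[j] for j, c in enumerate(clusters)]
        if new == centroids:  # step 4: stop when nothing changes
            break
        centroids = new
    return centroids, clusters
```

The `iters` cap reflects the "superpolynomial in the worst case, fast in practice" behavior noted above: convergence is usually quick, but bounding the loop is a cheap safeguard.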

slide-19
SLIDE 19

Outline

1. Recap
2. Evaluation
3. How many clusters?
4. Discussion

19 / 36

slide-20
SLIDE 20

What is a good clustering?

Internal criteria

Example of an internal criterion: RSS in K-means

But an internal criterion often does not evaluate the actual utility of a clustering in the application. Alternative: External criteria

Evaluate with respect to a human-defined classification

20 / 36

slide-21
SLIDE 21

External criteria for clustering quality

Based on a gold standard data set, e.g., the Reuters collection we also used for the evaluation of classification. Goal: the clustering should reproduce the classes in the gold standard. (But we only want to reproduce how documents are divided into groups, not the class labels.) A first measure of how well we were able to reproduce the classes: purity.

21 / 36

slide-22
SLIDE 22

External criterion: Purity

purity(Ω, C) = (1/N) Σ_k max_j |ω_k ∩ c_j|

Ω = {ω1, ω2, . . . , ωK} is the set of clusters and C = {c1, c2, . . . , cJ} is the set of classes. For each cluster ωk: find the class cj with the most members nkj in ωk. Sum all the nkj and divide by the total number of points.

22 / 36
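The purity formula can be computed directly from two aligned label lists, one cluster label and one gold-class label per document. A small sketch (function and variable names are my own):

```python
from collections import Counter

def purity(clusters, classes):
    """purity(Omega, C) = (1/N) * sum_k max_j |omega_k ∩ c_j|.

    clusters[i] and classes[i] are the cluster label and gold-class label
    of document i."""
    counts = {}
    for k, c in zip(clusters, classes):
        counts.setdefault(k, Counter())[c] += 1  # tally |omega_k ∩ c_j|
    # majority-class size in each cluster, summed, divided by total points
    return sum(max(cnt.values()) for cnt in counts.values()) / len(classes)
```

On the o/⋄/x example of the next slide this gives (5 + 4 + 3)/17 = 12/17 ≈ 0.71.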

slide-23
SLIDE 23

Example for computing purity

[Figure: 17 points in three clusters; cluster 1 holds five x's and one o, cluster 2 holds four o's, one x, and one ⋄, cluster 3 holds three ⋄'s and two x's]

To compute purity: 5 = max_j |ω1 ∩ cj| (class x, cluster 1); 4 = max_j |ω2 ∩ cj| (class o, cluster 2); and 3 = max_j |ω3 ∩ cj| (class ⋄, cluster 3). Purity is (1/17) × (5 + 4 + 3) = 12/17 ≈ 0.71.

23 / 36

slide-24
SLIDE 24

Rand index

Definition: RI = (TP + TN) / (TP + FP + FN + TN)

Based on a 2×2 contingency table of all pairs of documents:

                      same cluster           different clusters
same class            true positives (TP)    false negatives (FN)
different classes     false positives (FP)   true negatives (TN)

TP + FN + FP + TN is the total number of pairs. There are (N choose 2) pairs for N documents. Example: (17 choose 2) = 136 in the o/⋄/x example.

Each pair is either positive or negative (the clustering puts the two documents in the same or in different clusters) and either "true" (correct) or "false" (incorrect): the clustering decision is correct or incorrect.

24 / 36
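Since RI is defined over all document pairs, it can be computed by classifying each pair into one of the four contingency cells. A short sketch (function names are my own):

```python
from itertools import combinations

def rand_index(clusters, classes):
    """RI = (TP + TN) / (TP + FP + FN + TN), taken over all document pairs."""
    tp = fp = fn = tn = 0
    for i, j in combinations(range(len(classes)), 2):
        same_cluster = clusters[i] == clusters[j]
        same_class = classes[i] == classes[j]
        if same_cluster and same_class:
            tp += 1  # pair correctly placed together
        elif same_cluster:
            fp += 1  # placed together, but different classes
        elif same_class:
            fn += 1  # same class, but split across clusters
        else:
            tn += 1  # correctly kept apart
    return (tp + tn) / (tp + fp + fn + tn)
```

On the o/⋄/x example this yields (20 + 72)/136 = 92/136 ≈ 0.68, the value worked out on the following slides.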

slide-25
SLIDE 25

As an example, we compute RI for the o/⋄/x example. We first compute TP + FP. The three clusters contain 6, 6, and 5 points, respectively, so the total number of "positives", i.e., pairs of documents that are in the same cluster, is:

TP + FP = (6 choose 2) + (6 choose 2) + (5 choose 2) = 15 + 15 + 10 = 40

Of these, the x pairs in cluster 1, the o pairs in cluster 2, the ⋄ pairs in cluster 3, and the x pair in cluster 3 are true positives:

TP = (5 choose 2) + (4 choose 2) + (3 choose 2) + (2 choose 2) = 10 + 6 + 3 + 1 = 20

Thus, FP = 40 − 20 = 20. FN and TN are computed similarly.

25 / 36

slide-26
SLIDE 26

Rand measure for the o/⋄/x example

                      same cluster   different clusters
same class            TP = 20        FN = 24
different classes     FP = 20        TN = 72

RI is then (20 + 72)/(20 + 20 + 24 + 72) = 92/136 ≈ 0.68.

26 / 36

slide-27
SLIDE 27

Two other external evaluation measures

Normalized mutual information (NMI)
  • How much information does the clustering contain about the classification?
  • Singleton clusters (number of clusters = number of docs) have maximum MI
  • Therefore: normalize by the entropy of clusters and classes

F measure
  • Like Rand, but "precision" and "recall" can be weighted

27 / 36
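The NMI variant described here, mutual information normalized by the average of the cluster and class entropies, can be sketched as follows (function and variable names are my own):

```python
from collections import Counter
from math import log

def nmi(clusters, classes):
    """NMI(Omega, C) = I(Omega; C) / [(H(Omega) + H(C)) / 2]."""
    n = len(classes)
    wk = Counter(clusters)                   # cluster sizes |omega_k|
    cj = Counter(classes)                    # class sizes |c_j|
    joint = Counter(zip(clusters, classes))  # overlaps |omega_k ∩ c_j|
    # mutual information between the clustering and the classification
    mi = sum(v / n * log(n * v / (wk[k] * cj[j]))
             for (k, j), v in joint.items())
    def entropy(counts):
        return -sum(v / n * log(v / n) for v in counts.values())
    return mi / ((entropy(wk) + entropy(cj)) / 2)
```

The normalization is what penalizes the degenerate singleton clustering: its MI is maximal, but so is its cluster entropy H(Ω), pulling the ratio back down.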

slide-28
SLIDE 28

Evaluation results for the o/⋄/x example

                     purity   NMI    RI     F5
lower bound          0.0      0.0    0.0    0.0
maximum              1.0      1.0    1.0    1.0
value for example    0.71     0.36   0.68   0.46

All four measures range from 0 (really bad clustering) to 1 (perfect clustering).

28 / 36

slide-29
SLIDE 29

Outline

1. Recap
2. Evaluation
3. How many clusters?
4. Discussion

29 / 36

slide-30
SLIDE 30

How many clusters?

Either: the number of clusters K is given.

Then partition into K clusters. K might be given because there is some external constraint. Example: in the case of Scatter-Gather, it was hard to show more than 10–20 clusters on a monitor in the 90s.

Or: finding the "right" number of clusters is part of the problem.

Given the docs, find the K for which an optimum is reached. How do we define "optimum"? We can't use RSS or average squared distance from the centroid as the criterion: that always chooses K = N clusters.

30 / 36

slide-31
SLIDE 31

Exercise

Suppose we want to analyze the set of all articles published by a major newspaper (e.g., New York Times or Süddeutsche Zeitung) in 2008. Goal: write a two-page report about what the major news stories in 2008 were. We want to use K-means clustering to find the major news stories. How would you determine K?

31 / 36

slide-32
SLIDE 32

Simple objective function for K (1)

Basic idea:
  • Start with 1 cluster (K = 1)
  • Keep adding clusters (= keep increasing K)
  • Add a penalty for each new cluster

Trade off cluster penalties against average squared distance from the centroid. Choose the K with the best tradeoff.

32 / 36

slide-33
SLIDE 33

Simple objective function for K (2)

Given a clustering, define the cost of a document as its (squared) distance to the centroid.

Define the total distortion RSS(K) as the sum of all individual document costs (corresponds to average distance).

Then: penalize each cluster with a cost λ. Thus for a clustering with K clusters, the total cluster penalty is Kλ.

Define the total cost of a clustering as distortion plus total cluster penalty: RSS(K) + Kλ.

Select the K that minimizes RSS(K) + Kλ. Still need to determine a good value for λ . . .

33 / 36
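The selection rule is a one-liner once RSS(K) is known for each candidate K. A small sketch (the RSS values below are hypothetical, standing in for the distortions of actual K-means runs):

```python
def choose_k(rss_by_k, lam):
    """Select the K minimizing the total cost RSS(K) + K * lambda."""
    return min(rss_by_k, key=lambda k: rss_by_k[k] + k * lam)

# hypothetical distortions from K-means runs with K = 1..5
rss_by_k = {1: 100.0, 2: 40.0, 3: 25.0, 4: 22.0, 5: 21.0}
```

With λ = 5 this picks K = 3; with λ = 0 it degenerates to the largest K, since RSS(K) alone always decreases with K, which is exactly why the penalty term is needed.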

slide-34
SLIDE 34

Finding the “knee” in the curve

[Figure: residual sum of squares (1750–1950) plotted against the number of clusters (2–10)]

Pick the number of clusters where the curve "flattens". Here: 4 or 9.

34 / 36

slide-35
SLIDE 35

Outline

1. Recap
2. Evaluation
3. How many clusters?
4. Discussion

35 / 36

slide-36
SLIDE 36

Discussion 6

Jeffrey Dean and Sanjay Ghemawat, "MapReduce: Simplified Data Processing on Large Clusters." USENIX OSDI '04, 2004. http://www.usenix.org/events/osdi04/tech/full_papers/dean/dean.pdf

See also (Jan 2009):

http://michaelnielsen.org/blog/write-your-first-mapreduce-program-in-20-minutes/

part of lectures on the "Google technology stack":

http://michaelnielsen.org/blog/lecture-course-the-google-technology-stack/

(including PageRank, etc.) See Recap Lecture 22 for slides.

36 / 36