DATA MINING LECTURE 7
Clustering The k-means algorithm Hierarchical Clustering The DBSCAN algorithm Clustering Evaluation
What is a Clustering?
In general, a grouping of objects such that the objects in a group (cluster) are similar (or related) to one another and different from (or unrelated to) the objects in other groups.
Intra-cluster distances are minimized; inter-cluster distances are maximized.
Applications of cluster analysis:
Understanding: group related documents for browsing, genes and proteins that have similar functionality, stocks with similar price fluctuations, users with the same behavior.
Summarization: reduce the size of large data sets.
Discovered Clusters | Industry Group
Applied-Matl-DOWN, Bay-Network-DOWN, 3-COM-DOWN, Cabletron-Sys-DOWN, CISCO-DOWN, HP-DOWN, DSC-Comm-DOWN, INTEL-DOWN, LSI-Logic-DOWN, Micron-Tech-DOWN, Texas-Inst-DOWN, Tellabs-Inc-DOWN, Natl-Semiconduct-DOWN, Oracl-DOWN, SGI-DOWN, Sun-DOWN | Technology1-DOWN
Apple-Comp-DOWN, Autodesk-DOWN, DEC-DOWN, ADV-Micro-Device-DOWN, Andrew-Corp-DOWN, Computer-Assoc-DOWN, Circuit-City-DOWN, Compaq-DOWN, EMC-Corp-DOWN, Gen-Inst-DOWN, Motorola-DOWN, Microsoft-DOWN, Scientific-Atl-DOWN | Technology2-DOWN
Fannie-Mae-DOWN, Fed-Home-Loan-DOWN, MBNA-Corp-DOWN, Morgan-Stanley-DOWN | Financial-DOWN
Baker-Hughes-UP, Dresser-Inds-UP, Halliburton-HLD-UP, Louisiana-Land-UP, Phillips-Petro-UP, Unocal-UP, Schlumberger-UP | Oil-UP
[Figure: clustering precipitation in Australia]
The notion of a cluster can be ambiguous: how many clusters are there? [Figure: the same points grouped into two, four, or six clusters]
Types of clusterings:
Partitional clustering: a division of data objects into non-overlapping subsets (clusters) such that each data object is in exactly one subset. [Figure: Original Points and a Partitional Clustering]
Hierarchical clustering: a set of nested clusters organized as a hierarchical tree. [Figures: traditional and non-traditional hierarchical clusterings of points p1-p4 with their dendrograms]
Other distinctions between sets of clusters:
Exclusive versus non-exclusive: in non-exclusive clusterings, points may belong to multiple clusters.
Fuzzy versus non-fuzzy: in fuzzy clustering, a point belongs to every cluster with some weight between 0 and 1.
Partial versus complete: in some cases, we only want to cluster some of the data.
Types of clusters:
Well-separated clusters: a cluster is a set of points such that any point in the cluster is closer (or more similar) to every other point in the cluster than to any point not in the cluster. [Figure: 3 well-separated clusters]
Center-based clusters: a cluster is a set of objects such that each object is closer (more similar) to the "center" of its cluster than to the center of any other cluster. The center is often a centroid, a point that minimizes the distances from all the points in the cluster, or a medoid, the most "representative" point of a cluster. [Figure: 4 center-based clusters]
Contiguous clusters: each point in a cluster is closer (or more similar) to one or more other points in the cluster than to any point not in the cluster. [Figure: 8 contiguous clusters]
Density-based clusters: a cluster is a dense region of points, separated by low-density regions from other regions of high density; useful when the clusters are irregular or intertwined, and when noise and outliers are present. [Figure: 6 density-based clusters]
Conceptual clusters: points in a cluster share a particular concept. [Figure: 2 overlapping circles]
Clustering as an optimization problem:
One approach: enumerate all possible partitions of the data and evaluate the "goodness" of each potential set of clusters using a given objective function (NP-hard).
Another approach: fit the data to a parameterized model; the parameters of the model determine the clustering.
The k-means problem: given a set of points in a metric space and an integer $k$, group the points into $k$ clusters $C = \{C_1, C_2, \dots, C_k\}$ such that the Sum of Squares Error (SSE)
$$\mathrm{Cost}(C) = \sum_{i=1}^{k} \sum_{x \in C_i} \lVert x - c_i \rVert^2$$
is minimized, where $c_i$ is the mean of the points in cluster $C_i$.
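To make the two alternating steps concrete, here is a minimal NumPy sketch of the standard iterative (Lloyd's) algorithm for this objective; the function name, the random initialization scheme, and the convergence test are our own illustrative choices, not prescribed by the lecture:

```python
import numpy as np

def kmeans(X, k, n_iter=100, seed=0):
    """Lloyd's algorithm for the SSE objective above. X: (n, d) array."""
    rng = np.random.default_rng(seed)
    # Start from k distinct random points as initial centroids.
    centroids = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(n_iter):
        # Assignment step: send each point to its nearest centroid.
        dists = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        # Update step: each centroid becomes the mean of its assigned points.
        new = np.array([X[labels == i].mean(axis=0) if (labels == i).any()
                        else centroids[i] for i in range(k)])
        if np.allclose(new, centroids):
            break                      # assignments stabilized: converged
        centroids = new
    sse = ((X - centroids[labels]) ** 2).sum()   # the objective above
    return labels, centroids, sse
```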
[Figure: the same data set with its optimal clustering and a sub-optimal clustering found by k-means]
[Figures: k-means in action, showing successive iterations (1-6) of the algorithm from different random initializations; some initial centroids converge to the optimal clustering, others get stuck in a sub-optimal one]
K-means details:
Initial centroids are often chosen randomly, so the clusters produced vary from one run to another.
"Closeness" is measured by a distance function (e.g., Euclidean) or a similarity function (e.g., cosine, correlation), etc.
K-means usually converges within the first few iterations; often the stopping condition is changed to "until relatively few points change clusters".
Limitations of k-means: it has problems when clusters have differing sizes or differing densities [Figures: Original Points vs. K-means (3 Clusters)] or non-globular shapes [Figure: Original Points vs. K-means (2 Clusters)].
Overcoming k-means limitations: one solution is to use many clusters; each finds part of a natural cluster, but the parts then need to be put together. [Figures: Original Points vs. K-means Clusters]
Hierarchical clustering: produces a set of nested clusters organized as a hierarchical tree, visualized as a dendrogram, a tree-like diagram that records the sequence of merges or splits. [Figure: nested clusters of points 1-6 and the corresponding dendrogram with merge heights 0.05-0.2]
Two main types:
Agglomerative: start with the points as individual clusters; at each step, merge the two closest clusters (for example, the pair with the closest two points in the clusters) until one cluster (or k clusters) is left.
Divisive: start with one, all-inclusive cluster; at each step, split a cluster until each cluster contains a single point (or there are k clusters).
Traditional hierarchical algorithms use a similarity or distance matrix and merge or split one cluster at a time.
Strengths: we do not have to assume any particular number of clusters; any desired number can be obtained by "cutting" the dendrogram at the proper level. The clusters may also correspond to meaningful taxonomies (e.g., in the biological sciences: phylogeny reconstruction, ...).
Basic agglomerative clustering algorithm:
1. Compute the proximity matrix
2. Let each data point be a cluster
3. Repeat
4.   Merge the two closest clusters
5.   Update the proximity matrix
6. Until only a single cluster remains
The key operation is the computation of the proximity of two clusters; the different approaches to defining the distance between clusters distinguish the different algorithms.
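As a sketch of the generic algorithm above, the following naive Python implementation works directly on a precomputed distance matrix; the linkage argument ("min" for single link, "max" for complete link) anticipates the proximity definitions discussed next. Names and structure are illustrative, not from the slides:

```python
import numpy as np

def agglomerative(D, linkage="min"):
    """Naive agglomerative clustering over a precomputed distance matrix D.
    Returns the merge sequence as (members_a, members_b, merge_distance)."""
    clusters = {i: [i] for i in range(len(D))}   # each point starts as a cluster
    link = min if linkage == "min" else max
    merges = []
    while len(clusters) > 1:
        # Search the current proximity structure for the two closest clusters.
        best, best_d = None, float("inf")
        ids = sorted(clusters)
        for ia, a in enumerate(ids):
            for b in ids[ia + 1:]:
                d = link(D[p][q] for p in clusters[a] for q in clusters[b])
                if d < best_d:
                    best, best_d = (a, b), d
        a, b = best
        merges.append((clusters[a], clusters[b], best_d))
        clusters[a] = clusters[a] + clusters[b]  # merge b into a ...
        del clusters[b]                          # ... and update the structure
    return merges
```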
[Figures: starting situation, where each point p1...p12 is its own cluster, with the full proximity matrix; intermediate situation, with five clusters C1...C5 and their proximity matrix; we merge the two closest clusters (C2 and C5) and update the proximity matrix, so the rows and columns for the new cluster C2 U C5 must be recomputed]
How to define inter-cluster similarity? [Figure: two candidate clusters and the proximity matrix]
MIN
MAX
Group Average
Distance Between Centroids
Other methods driven by an objective function, e.g., Ward's Method uses the squared error.
Single link (MIN): two clusters merge as soon as a single pair of elements is linked.
Complete link (MAX): two clusters merge only when all pairs of elements have been linked.
Hierarchical clustering with MIN: [Figure: nested clusters of points 1-6 and the corresponding dendrogram, leaf order 3, 6, 2, 5, 4, 1, merge heights 0.05-0.2]
Distance matrix:
     1    2    3    4    5    6
1    0   .24  .22  .37  .34  .23
2   .24   0   .15  .20  .14  .25
3   .22  .15   0   .15  .28  .11
4   .37  .20  .15   0   .29  .22
5   .34  .14  .28  .29   0   .39
6   .23  .25  .11  .22  .39   0
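Feeding this matrix to the single-link sketch above reproduces the merge order of the dendrogram (points are renumbered 0-5 here):

```python
import numpy as np

D = np.array([
    [0.00, 0.24, 0.22, 0.37, 0.34, 0.23],
    [0.24, 0.00, 0.15, 0.20, 0.14, 0.25],
    [0.22, 0.15, 0.00, 0.15, 0.28, 0.11],
    [0.37, 0.20, 0.15, 0.00, 0.29, 0.22],
    [0.34, 0.14, 0.28, 0.29, 0.00, 0.39],
    [0.23, 0.25, 0.11, 0.22, 0.39, 0.00],
])
for a, b, d in agglomerative(D, linkage="min"):
    print(a, b, d)   # first merge: [2] and [5] at 0.11, i.e., points 3 and 6
```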
Strength of MIN: can handle non-elliptical shapes. [Figure: Original Points and the resulting Two Clusters]
Limitation of MIN: sensitive to noise and outliers. [Figure: Original Points and the resulting Two Clusters]
Hierarchical clustering with MAX: [Figure: nested clusters of points 1-6 and the corresponding dendrogram, leaf order 3, 6, 4, 1, 2, 5, merge heights 0.05-0.4; same distance matrix as above]
Strength of MAX: less susceptible to noise and outliers. [Figure: Original Points and the resulting Two Clusters]
Limitation of MAX: tends to break large clusters and is biased towards globular clusters. [Figure: Original Points and the resulting Two Clusters]
Cluster similarity: Group Average. The proximity of two clusters is the average of pairwise proximities between points in the two clusters:
$$\mathrm{proximity}(C_i, C_j) = \frac{\sum_{p \in C_i}\sum_{q \in C_j} \mathrm{proximity}(p, q)}{|C_i| \cdot |C_j|}$$
We need to use the average connectivity for scalability, since total proximity favors large clusters. (Same distance matrix as above.)
[Figure: group-average hierarchical clustering of points 1-6, nested clusters and dendrogram, leaf order 3, 6, 4, 1, 2, 5, merge heights 0.05-0.25; same distance matrix as above]
Ward's Method: the similarity of two clusters is based on the increase in squared error when the two clusters are merged; similar to group average if the distance between points is the distance squared.
[Figure: comparison of MIN, MAX, Group Average, and Ward's Method clusterings of the same six points]
Time and space requirements: O(N^2) space, since it uses the proximity matrix (N is the number of points), and O(N^3) time in many cases, since there are N steps and at each step the proximity matrix must be updated and searched; this can be reduced to O(N^2 log N) for some approaches.
Problems and limitations: once a decision is made to combine two clusters, it cannot be undone; no global objective function is directly minimized; and different schemes have problems with one or more of the following: sensitivity to noise and outliers, difficulty handling clusters of different sizes and non-globular shapes, and breaking large clusters.
DBSCAN: a density-based algorithm that partitions points into dense regions separated by not-so-dense regions.
Density = number of points within a specified radius (Eps).
A point is a core point if it has more than a specified number of points (MinPts) within Eps; these points are in the interior of a cluster.
A border point has fewer than MinPts within Eps, but is in the neighborhood of a core point.
A noise point is any point that is neither a core point nor a border point.
[Figure: Original Points and point types: core, border, and noise; Eps = 10, MinPts = 4]
Density edge: we place an edge between two core points q and p if they are within distance Eps of each other. [Figure: core points p, p1, q joined by density edges]
Density-connected points: a point p is density-connected to a point q if there is a path of density edges from p to q. [Figure: points p and q]
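Putting the definitions together, here is a minimal DBSCAN sketch in NumPy: it marks core points, grows each cluster along density edges, and leaves noise at -1. Function and variable names are illustrative, and this version counts a point in its own Eps-neighborhood:

```python
import numpy as np

def dbscan(X, eps, min_pts):
    """Minimal DBSCAN sketch. Returns labels: -1 for noise, 0..k-1 for clusters."""
    n = len(X)
    dist = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=2)
    neighbors = [np.flatnonzero(dist[i] <= eps) for i in range(n)]
    core = np.array([len(nb) >= min_pts for nb in neighbors])
    labels = np.full(n, -1)
    cluster = 0
    for i in range(n):
        if not core[i] or labels[i] != -1:
            continue
        # Grow a new cluster from this unvisited core point by following
        # density edges; border points join but do not expand the cluster.
        labels[i] = cluster
        frontier = [i]
        while frontier:
            p = frontier.pop()
            for q in neighbors[p]:
                if labels[q] == -1:
                    labels[q] = cluster
                    if core[q]:
                        frontier.append(q)   # only core points keep expanding
        cluster += 1
    return labels
```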
Determining Eps and MinPts: the idea is that, for points in a cluster, their k-th nearest neighbors are at roughly the same distance, while noise points have their k-th nearest neighbor at a farther distance; so, plot the sorted distance of every point to its k-th nearest neighbor and look for a knee. [Figure: sorted 4th-nearest-neighbor distances suggest Eps of roughly 7-10 with MinPts = 4]
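A small helper for this diagnostic, as a sketch (the plotting itself is left out):

```python
import numpy as np

def sorted_kth_nn_distances(X, k=4):
    """Sorted distance of every point to its k-th nearest neighbor.
    Plot the result and look for a knee to pick Eps (with MinPts = k)."""
    dist = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=2)
    dist.sort(axis=1)            # column 0 is each point's distance to itself
    return np.sort(dist[:, k])   # column k is the k-th nearest neighbor
```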
When DBSCAN works well: it is resistant to noise and can handle clusters of different shapes and sizes. [Figure: Original Points and the resulting Clusters]
When DBSCAN does not work well: varying densities and high-dimensional data are a problem. [Figures: Original Points; clusters found with (MinPts=4, Eps=9.75) and with (MinPts=4, Eps=9.92)]
Other clustering approaches: building a tree that is a compact summary of the data and then clustering the leaves (as in BIRCH); methods based on link analysis or on information-theoretic tools; and hierarchical schemes that use a richer representation of the cluster and consider both closeness and interconnectivity for merging (as in CHAMELEON).
Clustering evaluation: how do we evaluate the "goodness" of the resulting clusters?
Clusters found in random data: [Figure: the same 100 random points clustered by K-means, DBSCAN, and Complete Link; the algorithms report clusters even though the data is random]
Different aspects of cluster validation:
1. Determining the clustering tendency of a set of data, i.e., distinguishing whether non-random structure actually exists in the data.
2. Comparing the results of a cluster analysis to externally known results, e.g., to externally given class labels.
3. Evaluating how well the results of a cluster analysis fit the data without reference to external information.
4. Comparing the results of two different sets of cluster analyses to determine which is better.
5. Determining the "correct" number of clusters.
For 2, 3, and 4, we can further distinguish whether we want to evaluate the entire clustering or just individual clusters.
Measures of cluster validity: numerical measures that implement an evaluation criterion are classified into three types.
External index: measures the extent to which cluster labels match externally supplied class labels (e.g., entropy).
Internal index: measures the goodness of a clustering structure without reference to external information (e.g., SSE).
Relative index: compares two different clusterings or clusters; often an external or internal index is used for this purpose.
Measuring cluster validity via correlation: compute the correlation between the proximity matrix and the "incidence" matrix of the clustering (entry (i, j) is 1 if points i and j are in the same cluster, 0 otherwise). Since both matrices are symmetric, only the correlation between n(n-1)/2 entries needs to be calculated:
$$\mathrm{CorrCoeff}(X, Y) = \frac{\sum_i (x_i - \mu_X)(y_i - \mu_Y)}{\sqrt{\sum_i (x_i - \mu_X)^2}\,\sqrt{\sum_i (y_i - \mu_Y)^2}}$$
High (negative) correlation between distances and incidence indicates that points that belong to the same cluster are close to each other. This is not a good measure for some density- or contiguity-based clusters.
[Figure: K-means clusterings of two data sets; Corr = -0.9235 for the data with well-separated clusters, Corr = -0.5810 for the random data]
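A sketch of this computation in NumPy; the helper name and the way the incidence matrix is built from cluster labels are our own:

```python
import numpy as np

def clustering_correlation(X, labels):
    """Correlation between the proximity (distance) matrix and the
    incidence matrix of a clustering, over the n(n-1)/2 distinct pairs."""
    labels = np.asarray(labels)
    dist = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=2)
    same = (labels[:, None] == labels[None, :]).astype(float)
    iu = np.triu_indices(len(X), k=1)     # both matrices are symmetric
    return np.corrcoef(dist[iu], same[iu])[0, 1]
```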
Using the similarity matrix for cluster validation: order the similarity matrix with respect to cluster labels and inspect visually, with distances rescaled to similarities in [0, 1] via
$$\mathrm{sim}(i, j) = 1 - \frac{d_{ij} - d_{\min}}{d_{\max} - d_{\min}}$$
[Figure: well-separated clusters and the corresponding block-diagonal similarity matrix]
[Figures: similarity matrices, ordered by cluster label, for the clusters found in random data by DBSCAN, K-means, and Complete Link; the block structure is much less crisp than for well-separated clusters]
[Figure: DBSCAN clusters 1-7 on a more complex data set and the corresponding similarity matrix]
Drawback: building and inspecting the full similarity matrix requires quadratic computation, so the approach does not scale to large data sets.
Internal measures: SSE. An internal index measures the goodness of a clustering structure without reference to external information. SSE is good for comparing two clusterings or two clusters (average SSE), and can also be used to estimate the number of clusters. [Figure: a data set with ten natural clusters and the SSE-versus-K curve for K = 1...10; the knee in the curve suggests the number of clusters]
Internal measures: cohesion and separation.
Cluster cohesion measures how closely related the objects in a cluster are, e.g., by the within-cluster sum of squares:
$$\mathrm{WSS} = \sum_i \sum_{x \in C_i} (x - c_i)^2 \quad \text{(we want this to be small)}$$
Cluster separation measures how distinct a cluster is from the others, e.g., by the between-cluster sum of squares:
$$\mathrm{BSS} = \sum_i m_i\,(c - c_i)^2 \quad \text{(we want this to be large)}$$
where $m_i$ is the size of cluster $i$, $c_i$ its centroid, and $c$ the overall mean of the data. Separation can also be measured from the pairwise distances between points in different clusters, e.g., $\sum_{x \in C_i}\sum_{y \in C_j} (x - y)^2$.
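A direct transcription of the two formulas, as a hypothetical NumPy helper:

```python
import numpy as np

def wss_bss(X, labels):
    """Within-cluster (cohesion) and between-cluster (separation) sums of squares."""
    labels = np.asarray(labels)
    c = X.mean(axis=0)                          # overall mean of the data
    wss = bss = 0.0
    for i in np.unique(labels):
        Xi = X[labels == i]
        ci = Xi.mean(axis=0)                    # centroid of cluster i
        wss += ((Xi - ci) ** 2).sum()           # small is good
        bss += len(Xi) * ((c - ci) ** 2).sum()  # large is good
    return wss, bss
```

A useful sanity check: WSS + BSS equals the total sum of squares around the overall mean, which is constant for a given data set, so decreasing one increases the other.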
A proximity-graph-based view also captures cohesion and separation: cluster cohesion is the sum of the weights of all links within a cluster, while cluster separation is the sum of the weights of the links between nodes in the cluster and nodes outside the cluster. [Figure: cohesion edges within a cluster vs. separation edges between clusters]
Framework for cluster validity: any measure needs a frame of interpretation; for example, if our measure of evaluation has the value 10, is that good, fair, or poor?
Statistics provide such a framework: the more "atypical" a clustering result is, the more likely it represents valid structure in the data. We can compare the values of an index obtained from random data or random clusterings to those of a clustering result.
For comparing the results of two different sets of cluster analyses, a framework is less necessary; however, there is still the question of whether the difference between two index values is significant.
Statistical framework for SSE, example: 100 random points distributed in the range 0.2-0.8 for x and y; compare the SSE of a clustering of the real data against the distribution of SSE values obtained from such random data. [Figure: the random points, and a histogram of the SSE of three clusters found in repeated random data sets, with SSE values roughly between 0.016 and 0.034]
Statistical framework for correlation: the correlation of the incidence and proximity matrices for the K-means clusterings of the following two data sets. [Figure: well-separated clusters, Corr = -0.9235; random data, Corr = -0.5810]
Empirical p-value: the fraction of measurements in the random data that have value less than or equal to the value v observed in the real data (or greater than or equal, if we want to maximize). A small empirical p-value means that the structure found in the real data is unlikely to appear in a random dataset.
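A sketch of the computation, assuming the index has already been evaluated on the real data and on each random data set:

```python
import numpy as np

def empirical_p_value(real_value, random_values, minimize=True):
    """Fraction of random-data measurements at least as good as the real one."""
    random_values = np.asarray(random_values)
    if minimize:                                # e.g., SSE: smaller is better
        return np.mean(random_values <= real_value)
    return np.mean(random_values >= real_value)
```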
Estimating the "correct" number of clusters: look for a knee in the curve of the SSE of clusters found using K-means as K increases, or use an algorithm that does not require the number of clusters to be specified (e.g., DBSCAN). [Figure: a data set with clusters 1-7 and the SSE of clusters found using K-means for increasing K]
External measures for clustering validity: assume that the data have class labels, e.g., users classified according to their income, or politicians classified according to their political party. We want the clusters to be homogeneous with respect to classes.
Notation (the confusion matrix):
n_ij = number of points of class j in cluster i
m_i = sum over j of n_ij = size of cluster i
c_j = sum over i of n_ij = size of class j
n = total number of points

           Class 1  Class 2  Class 3 | size
Cluster 1    n11      n12      n13   |  m1
Cluster 2    n21      n22      n23   |  m2
Cluster 3    n31      n32      n33   |  m3
             c1       c2       c3    |  n
Measures: entropy and purity. Define $p_{ij} = n_{ij} / m_i$, the fraction of points of cluster i that come from class j (the probability that a point of cluster i belongs to class j); the confusion matrix above then holds the entries p11 ... p33.
Entropy of cluster i: $e_i = -\sum_{j=1}^{L} p_{ij} \log p_{ij}$, where $L$ is the number of classes; it is zero when the cluster is pure.
Entropy of the clustering: $e = \sum_{i=1}^{K} \frac{m_i}{n}\, e_i$, where $K$ is the number of clusters.
Purity of cluster i: $p_i = \max_j p_{ij}$.
Purity of the clustering: $p = \sum_{i=1}^{K} \frac{m_i}{n}\, p_i$.
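Both measures fall out of the confusion matrix directly; a sketch, with the helper name our own (rows are clusters, columns are classes):

```python
import numpy as np

def entropy_purity(M):
    """Entropy and purity of a clustering from its confusion matrix M,
    where M[i, j] = number of points of class j in cluster i."""
    M = np.asarray(M, dtype=float)
    m = M.sum(axis=1)                       # cluster sizes m_i
    n = M.sum()                             # total number of points
    P = M / m[:, None]                      # p_ij = n_ij / m_i
    logP = np.where(P > 0, np.log(np.where(P > 0, P, 1.0)), 0.0)  # 0 log 0 := 0
    entropy = (m / n) @ (-(P * logP).sum(axis=1))
    purity = (m / n) @ P.max(axis=1)
    return entropy, purity
```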
Measures: precision, recall, and F-measure, defined per cluster i and class j over the same confusion matrix.
Precision: $\mathrm{Prec}(i, j) = \frac{n_{ij}}{m_i} = p_{ij}$.
Recall: $\mathrm{Rec}(i, j) = \frac{n_{ij}}{c_j}$.
F-measure, the harmonic mean of the two:
$$F(i, j) = \frac{2 \cdot \mathrm{Prec}(i, j) \cdot \mathrm{Rec}(i, j)}{\mathrm{Prec}(i, j) + \mathrm{Rec}(i, j)}$$
Precision and recall of a cluster and of a clustering: assign to cluster i its majority class, $k_i = \arg\max_j n_{ij}$.
Precision of cluster i: $\mathrm{Prec}(i) = \frac{n_{i k_i}}{m_i}$; overall precision of the clustering: $\mathrm{Prec} = \sum_{i=1}^{K} \frac{m_i}{n}\, \mathrm{Prec}(i)$.
Recall of cluster i: $\mathrm{Rec}(i) = \frac{n_{i k_i}}{c_{k_i}}$; overall recall: $\mathrm{Rec} = \sum_{i=1}^{K} \frac{m_i}{n}\, \mathrm{Rec}(i)$.
Precision/Recall for clusters and clusterings
Example 1 (a poor clustering):
           Class 1  Class 2  Class 3 | size
Cluster 1    20       35       35    |  90
Cluster 2    30       42       38    | 110
Cluster 3    38       35       27    | 100
            100      100      100    | 300
Purity: (0.38, 0.38, 0.38) – overall 0.38
Precision: (0.38, 0.38, 0.38) – overall 0.38
Recall: (0.35, 0.42, 0.38) – overall 0.39

Example 2 (a good clustering):
           Class 1  Class 2  Class 3 | size
Cluster 1     2        3       85    |  90
Cluster 2    90       12        8    | 110
Cluster 3     8       85        7    | 100
            100      100      100    | 300
Purity: (0.94, 0.81, 0.85) – overall 0.86
Precision: (0.94, 0.81, 0.85) – overall 0.86
Recall: (0.85, 0.9, 0.85)

Example 3 (a small pure cluster split off):
           Class 1  Class 2  Class 3 | size
Cluster 1                      35    |  35
Cluster 2    50       77       38    | 165
Cluster 3    38       35       27    | 100
            100      100      100    | 300
Cluster 1: Purity: 1, Precision: 1, Recall: 0.35
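As a check, the good clustering from Example 2 reproduces the slide's numbers up to rounding; a short self-contained computation:

```python
import numpy as np

# Confusion matrix of the 'good' clustering: rows are clusters, columns classes.
M = np.array([[ 2,  3, 85],
              [90, 12,  8],
              [ 8, 85,  7]])
m, c = M.sum(axis=1), M.sum(axis=0)   # cluster sizes, class sizes
k = M.argmax(axis=1)                  # majority class of each cluster
prec = M[np.arange(3), k] / m         # [0.944 0.818 0.85], the slide's (0.94, 0.81, 0.85)
rec = M[np.arange(3), k] / c[k]       # [0.85  0.9   0.85], the slide's (0.85, 0.9, 0.85)
overall = (m / m.sum()) @ prec        # 260/300 ~ 0.867, the slide's 'overall 0.86'
```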