  1. Applied Machine Learning: Clustering. Siamak Ravanbakhsh, COMP 551 (Fall 2020)

  2. Learning objectives: what is clustering and when is it useful? what are the different types of clustering? some clustering algorithms: k-means, k-medoids, DBSCAN, hierarchical clustering.

  3. Motivation: for many applications we want to classify the data without having any labels (an unsupervised learning task), e.g., categories of shoppers or items based on their shopping patterns; communities in a social network; categories of stars or galaxies based on light profile, mass, age, etc.; categories of minerals based on spectroscopic measurements; categories of webpages in meta-search engines; categories of living organisms based on their genome; ...

  5. What is a cluster? A subset of entities that are similar to each other and different from other entities. We can organize clustering methods based on: the form of the input data, the type of cluster / task, and the general methodology.

  6. Types of input: 1. features $X \in \mathbb{R}^{N \times D}$; 2. pairwise distances or similarities $D \in \mathbb{R}^{N \times N}$ (we can often produce similarities from features, though this is infeasible for very large datasets); 3. attributed graphs, where a node attribute is similar to a feature in the first family, and an edge attribute can represent a similarity or distance.
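As a small illustration of moving between these input types (not from the slides; assumes NumPy and SciPy are available), the sketch below turns a feature matrix X into an N x N pairwise-distance matrix D:

import numpy as np
from scipy.spatial.distance import cdist

# toy feature matrix: N = 5 points with D = 3 features each
X = np.random.default_rng(0).normal(size=(5, 3))

# N x N matrix of pairwise Euclidean distances; this quadratic matrix is
# exactly what becomes infeasible to compute and store for very large datasets
D = cdist(X, X, metric="euclidean")
print(D.shape)  # (5, 5)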

  7. Types of cluster / task: partitioning (hard clusters), soft / fuzzy membership, and overlapping clusters (the slide illustrates each with two clusters, cluster 1 and cluster 2).

  8. Types of cluster / task, other categories: hierarchical clustering, with hard, soft, or overlapping membership. It is customary to use a dendrogram to represent a hierarchical clustering. Example: the tree of life (a clustering of genotypes).

  9. Types of cluster / task, other categories: co-clustering or biclustering, the simultaneous clustering of instances and features; we can re-order the rows of X such that points in the same cluster appear next to each other, and do the same for the features. Examples: co-clustering of users and items in online stores; conditions and gene expressions; ... Below (in the slides): co-clustering of mammals and their features.

  10. Centroid methods: identify centers, prototypes, or exemplars of each cluster (an early use of clustering in psychology). Example: cluster centers shown on a map, giving a hierarchical clustering where the level of the hierarchy depends on the zoom level (image: Frey & Dudek '00). K-means is an example of a centroid method.

  11. K-means clustering: objective. Idea: partition the data into K clusters so as to minimize the sum of distances of points to their cluster mean/center, which is equivalent to minimizing the within-cluster distances. Cost function: $J(\{r_{n,k}\}, \{\mu_k\}) = \sum_{n=1}^{N} \sum_{k=1}^{K} r_{n,k} \|x^{(n)} - \mu_k\|_2^2$, where $N$ is the number of points, $K$ the number of clusters, the cluster membership $r_{n,k}$ is 1 if point $n$ belongs to cluster $k$ and 0 otherwise, and $\mu_k = \frac{\sum_n r_{n,k} x^{(n)}}{\sum_n r_{n,k}}$ is the cluster center (mean). We need to find both the cluster memberships and the cluster centers; how do we minimize this cost?
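A minimal NumPy sketch of this cost function (not part of the course material; X is the N x D data matrix, r an N x K one-hot membership matrix, and mu a K x D matrix of centers):

import numpy as np

def kmeans_cost(X, r, mu):
    # J({r}, {mu}) = sum_n sum_k r[n, k] * ||x^(n) - mu_k||^2
    diff = X[:, None, :] - mu[None, :, :]   # shape (N, K, D)
    sq_dist = np.sum(diff ** 2, axis=-1)    # shape (N, K)
    return np.sum(r * sq_dist)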

  12. K-means clustering: algorithm. Idea: iteratively update the cluster memberships and the cluster centers. Start with some cluster centers $\{\mu_k\}$, then repeat until convergence: (1) assign each point to the closest center, $r_{n,k} = 1$ if $k = \arg\min_c \|x^{(n)} - \mu_c\|_2^2$ and $0$ otherwise; (2) re-calculate the center of each cluster, $\mu_k \leftarrow \frac{\sum_n r_{n,k} x^{(n)}}{\sum_n r_{n,k}}$. Since each iteration can only reduce the cost, the algorithm has to stop.
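A minimal NumPy sketch of this alternating procedure (illustrative, not the course's reference implementation; random initialization of the centers is assumed):

import numpy as np

def kmeans(X, K, n_iters=100, seed=0):
    rng = np.random.default_rng(seed)
    # start with some cluster centers: K distinct data points chosen at random
    mu = X[rng.choice(len(X), size=K, replace=False)]
    for _ in range(n_iters):
        # assignment step: index of the closest center for each point
        dists = np.sum((X[:, None, :] - mu[None, :, :]) ** 2, axis=-1)  # (N, K)
        z = np.argmin(dists, axis=1)
        # update step: re-calculate each center as the mean of its assigned points
        new_mu = np.array([X[z == k].mean(axis=0) if np.any(z == k) else mu[k]
                           for k in range(K)])
        if np.allclose(new_mu, mu):  # converged: centers stopped moving
            break
        mu = new_mu
    return mu, z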

  13. K-means clustering: algorithm example. Iterations of k-means (K = 2) on 2D data; the two steps of each iteration are shown, and the cost decreases at each step.

  14. K-means clustering: derivation. Why does this procedure minimize the cost $J(\{r_{n,k}\}, \{\mu_k\}) = \sum_{n=1}^{N} \sum_{k=1}^{K} r_{n,k} \|x^{(n)} - \mu_k\|_2^2$? 1. Fix the memberships $\{r_{n,k}\}$ and optimize the centers $\{\mu_k\}$: setting the derivative with respect to $\mu_k$ to zero, $\frac{\partial}{\partial \mu_k} J = \frac{\partial}{\partial \mu_k} \sum_n r_{n,k} \|x^{(n)} - \mu_k\|_2^2 = -2 \sum_n r_{n,k} (x^{(n)} - \mu_k) = 0$, gives $\mu_k = \frac{\sum_n r_{n,k} x^{(n)}}{\sum_n r_{n,k}}$. 2. Fix the centers $\{\mu_k\}$ and optimize the memberships $\{r_{n,k}\}$: assigning each point to the "closest" center minimizes the cost, i.e. $r_{n,k} = 1$ if $k = \arg\min_c \|x^{(n)} - \mu_c\|_2^2$ and $0$ otherwise. 3. Repeat 1 & 2 until convergence.
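A quick numerical sanity check of step 1 (illustrative, NumPy only): for a fixed set of points assigned to one cluster, their mean should achieve a cost no larger than any perturbed candidate center.

import numpy as np

rng = np.random.default_rng(1)
pts = rng.normal(size=(50, 2))     # points assigned to a single cluster
mean = pts.mean(axis=0)

def sq_cost(center):
    # sum of squared Euclidean distances to a candidate center
    return np.sum((pts - center) ** 2)

# the mean should never lose to a randomly perturbed center
assert all(sq_cost(mean) <= sq_cost(mean + rng.normal(size=2)) for _ in range(100))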

  15. K-means clustering: complexity. In each iteration of the algorithm above (assign each point to the closest center, then re-calculate the center of each cluster): calculating the distance of one point to one center is $O(D)$, where $D$ is the number of features, and we do this for each point $n$ and each center $k$; calculating the means of all clusters is $O(ND)$; so the total cost per iteration is $O(NKD)$.

  16. K-means clustering: performance. K-means' alternating minimization finds a local minimum (we'll come back to this later!), and different initializations of the cluster centers give different clusterings. Example: the Iris flowers dataset (also interesting to compare to the true class labels); two runs shown in the slides reach costs 37.05 and 37.08. Note that even if the clustering is the same, we could swap cluster indices (colors).

  17. K-means clustering: initialization (optional). K-means' alternating minimization finds a local minimum, and different initializations give different clusterings. Remedies: run the algorithm many times and pick the clustering with the lowest cost, or use a good heuristic for initialization, such as K-means++: pick a random data point to be the first center; calculate the distance $d_n$ of each point to the nearest center; pick a new point as a new center with probability $p(n) = \frac{d_n^2}{\sum_i d_i^2}$; repeat until there are K centers. This often gives faster convergence to better solutions, and the clustering cost is within an $O(\log K)$ factor of the optimal solution.
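A minimal sketch of K-means++ initialization (illustrative, NumPy only); the returned centers can be passed as the starting {mu_k} of the k-means loop sketched earlier:

import numpy as np

def kmeanspp_init(X, K, seed=0):
    rng = np.random.default_rng(seed)
    # pick a random data point to be the first center
    centers = [X[rng.integers(len(X))]]
    for _ in range(K - 1):
        # squared distance of each point to its nearest chosen center
        d2 = np.min(np.sum((X[:, None, :] - np.array(centers)[None, :, :]) ** 2,
                           axis=-1), axis=1)
        # pick the next center with probability p(n) = d_n^2 / sum_i d_i^2
        centers.append(X[rng.choice(len(X), p=d2 / d2.sum())])
    return np.array(centers)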

  18. Application: vector quantization. Given a dataset of vectors $\mathcal{D} = \{x^{(1)}, \ldots, x^{(N)}\}$ with $x^{(n)} \in \mathbb{R}^D$, storage costs $O(NDC)$, where $C$ is the number of bits per scalar (e.g., 32 bits). We can compress the data using k-means by replacing each data point with its cluster center: we store only the cluster centers and the per-point cluster indices, costing $O(KDC + N \log K)$. We can apply this to compress images by treating each pixel as a point $x^{(n)} \in \mathbb{R}^3$ (image: Frey and Dudek '00).
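A sketch of this compression for an RGB image (illustrative; uses scikit-learn's KMeans, and image is assumed to be an H x W x 3 uint8 array):

import numpy as np
from sklearn.cluster import KMeans

def quantize_image(image, K=16, seed=0):
    # treat each pixel as a point x^(n) in R^3
    pixels = image.reshape(-1, 3).astype(float)
    km = KMeans(n_clusters=K, n_init=10, random_state=seed).fit(pixels)
    # replace each pixel by its cluster center: we only need to store the
    # K centers (K x 3 scalars) plus one log2(K)-bit index per pixel
    quantized = km.cluster_centers_[km.labels_]
    return quantized.reshape(image.shape).astype(np.uint8)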

  19. K-medoids. The K-means objective minimizes squared Euclidean distance, for which the minimizer for a set of points is given by their mean. If we use the Manhattan distance $D(x, x') = \sum_d |x_d - x'_d|$, the minimizer is the median (K-medians). For general distance functions the minimizer doesn't have a closed form (it is computationally expensive). Solution: pick the cluster centers from among the points themselves (medoids). K-medoids objective: $J(\{r_{n,k}\}, \{\mu_k\}) = \sum_{n=1}^{N} \sum_{k=1}^{K} r_{n,k} \, \mathrm{dist}(x^{(n)}, \mu_k)$ with $\mu_k \in \{x^{(1)}, \ldots, x^{(N)}\}$. Algorithm: assign each point to the "closest" center, then set the point with the minimum overall distance to the other points in its cluster as the new center of that cluster.
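A minimal sketch of this assign-then-recompute procedure (illustrative; it takes a precomputed N x N matrix of pairwise dissimilarities, which also covers the graph-input case mentioned on the next slide):

import numpy as np

def k_medoids(D, K, n_iters=100, seed=0):
    # D: (N, N) matrix of pairwise distances (any dissimilarity works)
    rng = np.random.default_rng(seed)
    medoids = rng.choice(len(D), size=K, replace=False)
    for _ in range(n_iters):
        # assign each point to the closest medoid
        z = np.argmin(D[:, medoids], axis=1)
        # within each cluster, the point with minimum overall distance
        # to the other members becomes the new medoid
        new_medoids = medoids.copy()
        for k in range(K):
            members = np.where(z == k)[0]
            if len(members) > 0:
                within = D[np.ix_(members, members)].sum(axis=1)
                new_medoids[k] = members[np.argmin(within)]
        if set(new_medoids) == set(medoids):
            break
        medoids = new_medoids
    return medoids, z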

  20. K-medoids, example: finding key air-travel hubs (as medoids). K-medoids also makes sense when the input is a graph (nodes become centers) (Frey and Dudek '00).

  21. Density-based methods: dense regions define clusters. A notable method is density-based spatial clustering of applications with noise (DBSCAN). The slides compare K-means vs. DBSCAN and show geospatial clustering and astronomical data (image credit: wiki, https://doublebyteblog.wordpress.com).

  22. DBSCAN: points that have more than $C$ neighbors in their $\epsilon$-neighborhood are called core points. If we connect nearby core points we get a graph, and the connected components of that graph give us the clusters. All other points are either $\epsilon$-close to a core point, so they belong to that cluster, or are labeled as noise. (The slide's figure shows an example with $C = 4$; image credit: wiki.)
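A usage sketch (illustrative; scikit-learn's DBSCAN, where eps plays the role of epsilon and min_samples the role of C):

import numpy as np
from sklearn.cluster import DBSCAN

# two dense blobs plus a few scattered points that should end up as noise
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0.0, 0.3, size=(50, 2)),
               rng.normal(3.0, 0.3, size=(50, 2)),
               rng.uniform(-2.0, 5.0, size=(5, 2))])

labels = DBSCAN(eps=0.5, min_samples=4).fit_predict(X)
print(set(labels))  # cluster indices 0, 1, ...; noise points are labeled -1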

  23. Hierarchical clustering heuristics. Bottom-up hierarchical clustering (agglomerative clustering): start from each item as its own cluster and repeatedly merge the most similar clusters. Top-down hierarchical clustering (divisive clustering): start from one big cluster and at each iteration pick the "widest" cluster and split it (e.g., using k-means). These methods often do not optimize a specific objective function (hence "heuristics"), and they are often too expensive for very large datasets.

  24. Agglomerative clustering: start from each item as its own cluster and merge the most similar clusters. Initialize the clusters $C_n \leftarrow \{n\}$ for $n \in \{1, \ldots, N\}$ and the set of clusters available for merging $A \leftarrow \{1, \ldots, N\}$. For $t = 1, \ldots$: pick the two most similar clusters, $i, j \leftarrow \arg\min_{c, c' \in A} \mathrm{distance}(C_c, C_{c'})$; merge them to get a new cluster $C_{t+N} \leftarrow C_i \cup C_j$; if $C_{t+N}$ contains all nodes, we are done; otherwise update the clusters available for merging, $A \leftarrow A \cup \{t+N\} \setminus \{i, j\}$, and calculate the dissimilarities $\mathrm{distance}(C_{t+N}, C_n)$ for all $n \in A$. How should we define the dissimilarity or distance between two clusters?
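A usage sketch (illustrative; SciPy's agglomerative clustering, where the method argument, e.g. 'single', 'complete', or 'average' linkage, is exactly the choice of cluster-to-cluster distance raised at the end of the slide):

import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0.0, 0.3, size=(20, 2)),
               rng.normal(3.0, 0.3, size=(20, 2))])

# bottom-up merges: Z records which clusters were merged and at what distance
Z = linkage(X, method="average", metric="euclidean")

# cut the tree into 2 flat clusters; plotting Z with
# scipy.cluster.hierarchy.dendrogram gives a dendrogram like the tree-of-life slide
labels = fcluster(Z, t=2, criterion="maxclust")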
