  1. EM, K-Means, GNG 4-6-16

  2. Reading Quiz
     Which of the following can be considered an instance of the EM algorithm?
     a) Agglomerative clustering
     b) Divisive clustering
     c) K-means clustering
     d) Growing neural gas

  3. EM algorithm
     E step: “expectation” … terrible name
     ● Classify the data using the current model.
     M step: “maximization” … slightly less terrible name
     ● Generate the best model using the current classification of the data.
     Initialize the model, then alternate E and M steps until convergence.
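In code, the whole algorithm is just this alternation. Here is a minimal, model-agnostic sketch in Python; `init_model`, `classify`, and `refit` are hypothetical placeholders for whatever model family you are fitting, and the convergence test mirrors the k-means criterion on the next slide (soft-EM variants usually check the log likelihood instead):

```python
def em(data, init_model, classify, refit, max_iters=100):
    """Generic EM skeleton: alternate E and M steps until convergence.
    `classify` returns e.g. a list of cluster labels, one per data point."""
    model = init_model(data)
    assignments = None
    for _ in range(max_iters):
        new_assignments = classify(data, model)   # E step
        if new_assignments == assignments:        # converged: nothing changed
            break
        assignments = new_assignments
        model = refit(data, assignments)          # M step
    return model, assignments
```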

  4. K-means algorithm
     Model: k clusters, each represented by a centroid.
     E step:
     ● Assign each point to the closest centroid.
     M step:
     ● Move each centroid to the mean of the points assigned to it.
     Convergence: we ran an E step in which no point’s assignment changed.
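As a concrete sketch (assuming numpy, and using random data points as the initial centroids, which is one of the options on the next slide):

```python
import numpy as np

def kmeans(points, k, max_iters=100, seed=0):
    """Lloyd's algorithm as an instance of EM. `points` is an (n, d) array."""
    rng = np.random.default_rng(seed)
    # Initialize: k distinct data points become the starting centroids.
    centroids = points[rng.choice(len(points), size=k, replace=False)].astype(float)
    assignments = None
    for _ in range(max_iters):
        # E step: assign each point to the closest centroid.
        dists = np.linalg.norm(points[:, None, :] - centroids[None, :, :], axis=2)
        new_assignments = dists.argmin(axis=1)
        # Convergence: an E step in which no point's assignment changed.
        if assignments is not None and np.array_equal(new_assignments, assignments):
            break
        assignments = new_assignments
        # M step: move each centroid to the mean of the points assigned to it.
        for j in range(k):
            members = points[assignments == j]
            if len(members):              # leave an empty cluster's centroid alone
                centroids[j] = members.mean(axis=0)
    return centroids, assignments
```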

  5. Initializing k-means
     Reasonable options:
     1) Start with a random E step.
     ● Randomly assign each point to a cluster in {1, 2, …, k}.
     2) Start with a random M step.
     a) Pick random centroids within the maximum range of the data.
     b) Pick random data points to use as initial centroids.
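Each option is a line or two of numpy; the function names below are made up for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# 1) Random E step: assign each point to a random cluster in {0, ..., k-1}.
def init_random_assignment(points, k):
    return rng.integers(0, k, size=len(points))

# 2a) Random M step: pick centroids uniformly within the range of the data.
def init_random_range(points, k):
    lo, hi = points.min(axis=0), points.max(axis=0)
    return rng.uniform(lo, hi, size=(k, points.shape[1]))

# 2b) Random M step: pick k distinct data points as the initial centroids.
def init_random_points(points, k):
    return points[rng.choice(len(points), size=k, replace=False)].astype(float)
```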

  6. K-means in action

  7. Other examples of EM
     ● Naive Bayes soft clustering (from the reading)
     ● Gaussian mixture model clustering

  8. Gaussian mixture models
     A Gaussian distribution is the multivariate generalization of the normal distribution (the classic bell curve).
     A Gaussian mixture is a distribution composed of several independent Gaussians.
     If we model our data as a Gaussian mixture, we’re saying that each data point was a random draw from one of several Gaussian distributions (but we may not know which).
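A short sketch of that generative story, with made-up weights, means, and covariances; the `which` array records the hidden component that produced each point, which is exactly the information EM has to infer when it isn't given:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 2-component mixture in 2D: mixing weights, means, covariances.
weights = np.array([0.3, 0.7])
means = [np.array([0.0, 0.0]), np.array([4.0, 4.0])]
covs = [np.eye(2), np.array([[2.0, 0.5], [0.5, 1.0]])]

def sample_mixture(n):
    """Each point is a random draw from one of the Gaussians."""
    which = rng.choice(len(weights), size=n, p=weights)   # pick a component
    points = np.array([rng.multivariate_normal(means[z], covs[z]) for z in which])
    return points, which
```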

  9. EM for Gaussian mixture models
     Model: data drawn from a mixture of k Gaussians.
     E step:
     ● Compute the (log) likelihood of the data.
     ○ Each point’s probability of being drawn from each Gaussian.
     M step:
     ● Update the mean and covariance of each Gaussian.
     ○ Weighted by how responsible that Gaussian was for each data point.
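A minimal sketch of this loop, assuming numpy and scipy. It is not a hardened implementation: it runs a fixed number of iterations rather than checking the log likelihood for convergence, and the tiny ridge added to each covariance is only there to keep the matrices invertible:

```python
import numpy as np
from scipy.stats import multivariate_normal

def em_gmm(points, k, n_iters=50, seed=0):
    """EM for a k-component Gaussian mixture on an (n, d) array `points`."""
    rng = np.random.default_rng(seed)
    n, d = points.shape
    weights = np.full(k, 1.0 / k)
    means = points[rng.choice(n, size=k, replace=False)].astype(float)
    covs = np.array([np.cov(points.T) + 1e-6 * np.eye(d)] * k)

    for _ in range(n_iters):
        # E step: each point's posterior probability of being drawn from
        # each Gaussian -- the "responsibilities".
        resp = np.column_stack([
            weights[j] * multivariate_normal.pdf(points, means[j], covs[j])
            for j in range(k)])
        resp /= resp.sum(axis=1, keepdims=True)

        # M step: update weight, mean, and covariance of each Gaussian,
        # weighted by how responsible it was for each data point.
        nk = resp.sum(axis=0)
        weights = nk / n
        means = (resp.T @ points) / nk[:, None]
        for j in range(k):
            diff = points - means[j]
            covs[j] = (resp[:, j, None] * diff).T @ diff / nk[j] + 1e-6 * np.eye(d)
    return weights, means, covs
```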

  10. How do we pick k?
     There’s no hard rule.
     ● Sometimes the application for which the clusters will be used dictates k.
     ● If k can be flexible, then we need to consider the tradeoffs (see the sketch below):
     ○ Higher k will always decrease the error (increase the likelihood).
     ○ Lower k will always produce a simpler model.
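The first tradeoff is easy to see empirically. Reusing the hypothetical `kmeans` sketch from slide 4, this loop would show the total squared error shrinking as k grows (in the best case over restarts), which is exactly why error alone can't choose k:

```python
import numpy as np

def total_error(points, centroids, assignments):
    """Sum of squared distances from each point to its assigned centroid."""
    return np.sum((points - centroids[assignments]) ** 2)

# Hypothetical usage: error decreases with k, so there is no minimum to find.
# for k in range(1, 10):
#     centroids, assignments = kmeans(points, k)
#     print(k, total_error(points, centroids, assignments))
```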

  11. Growing neural gas
     0) Start with two random connected nodes, then repeat 1–9:
     1) Pick a random data point.
     2) Find the two closest nodes to the data point.
     3) Increment the age of all edges from the closest node.
     4) Add the squared distance to the error of the closest node.
     5) Move the closest node and all of its neighbors toward the data point.
     ● Move the closest node more than its neighbors.
     6) Connect the two closest nodes, or reset their edge’s age if they’re already connected.
     7) Remove old edges; if a node is isolated, delete it.
     8) Every λ iterations, add a new node.
     ● Between the highest-error node and its highest-error neighbor.
     9) Decay all errors.
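To make the bookkeeping concrete, here is a compact Python sketch of the loop above. Nodes are positions in `pos` with a parallel `error` list, and `edges` maps a sorted node-index pair to its age. The parameter names and defaults (eps_b, eps_n, alpha, lam, ...) are illustrative, not taken from the slides, and step 7's isolated-node deletion is skipped to keep indices stable:

```python
import numpy as np

def gng(data, max_nodes=50, n_iters=5000, eps_b=0.2, eps_n=0.006,
        age_max=50, lam=100, alpha=0.5, decay=0.995, seed=0):
    rng = np.random.default_rng(seed)
    pos = [data[rng.integers(len(data))].astype(float) for _ in range(2)]
    error = [0.0, 0.0]
    edges = {(0, 1): 0}                         # 0) two random connected nodes

    def key(a, b):
        return (min(a, b), max(a, b))

    def neighbors(n):
        return [a if b == n else b for (a, b) in edges if n in (a, b)]

    for t in range(1, n_iters + 1):
        x = data[rng.integers(len(data))]       # 1) pick a random data point
        d2 = [float(np.sum((p - x) ** 2)) for p in pos]
        s, r = (int(i) for i in np.argsort(d2)[:2])   # 2) two closest nodes
        for e in edges:                         # 3) age all edges from s
            if s in e:
                edges[e] += 1
        error[s] += d2[s]                       # 4) add squared distance to s
        pos[s] += eps_b * (x - pos[s])          # 5) move s a lot...
        for nb in neighbors(s):
            pos[nb] += eps_n * (x - pos[nb])    #    ...and its neighbors a little
        edges[key(s, r)] = 0                    # 6) connect s and r / reset age
        for e in [e for e in edges if edges[e] > age_max]:
            del edges[e]                        # 7) remove old edges
        if t % lam == 0 and len(pos) < max_nodes:     # 8) insert a new node
            q = max(range(len(pos)), key=lambda n: error[n])
            nbrs = neighbors(q)
            if nbrs:
                f = max(nbrs, key=lambda n: error[n])
                pos.append((pos[q] + pos[f]) / 2)     # between q and f
                error[q] *= alpha
                error[f] *= alpha
                error.append(error[q])          # new node inherits q's error
                new = len(pos) - 1
                del edges[key(q, f)]
                edges[key(q, new)] = 0
                edges[key(f, new)] = 0
        error = [e * decay for e in error]      # 9) decay all errors
    return np.array(pos), edges
```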

  12. Adjusting nodes based on one data point

  13. Adjusting nodes based on one data point
     This node’s error increases. These edges get aged.

  14. Every λ iterations, add a new node
     Highest-error node. Highest-error neighbor.

  15. Growing neural gas
     0) Start with two random connected nodes, then repeat 1–9:
     1) Pick a random data point.
     2) Find the two closest nodes to the data point.
     3) Increment the age of all edges from the closest node.
     4) Add the squared distance to the error of the closest node.
     5) Move the closest node and all of its neighbors toward the data point.
     ● Move the closest node more than its neighbors.
     6) Connect the two closest nodes, or reset their edge’s age if they’re already connected.
     7) Remove old edges.
     8) Every λ iterations, add a new node.
     ● Between the highest-error node and its highest-error neighbor.
     9) Decay all errors.

  16. Growing neural gas in action

  17. Discussion question
     What unsupervised learning problem is growing neural gas solving?
     Is it clustering? Is it dimensionality reduction? Is it something else?
