Informational and Computational Limits of Clustering, and other questions about clustering


  1. Informational and Computational Limits of Clustering, and other questions about clustering. Nati Srebro (University of Toronto), based on work in progress with Gregory Shakhnarovich (Brown University) and Sam Roweis (University of Toronto).

  2. “Clustering”
  • Clustering with respect to a specific model / structure / objective
  • Gaussian mixture model
    – Each point comes from one of k “centers”
    – Gaussian cloud around each center
    – For now: unit-variance Gaussians, uniform prior over choice of center
  • As an optimization problem:
    – Likelihood of the centers: $\sum_i \log\big(\sum_j \exp(-\|x_i - \mu_j\|^2/2)\big)$
    – k-means objective (likelihood of the assignment): $\sum_i \min_j \|x_i - \mu_j\|^2$
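The two objectives on this slide can be written down directly. Here is a minimal NumPy sketch, purely as an illustration of the formulas (not code from the talk), with the normalization constants and the uniform 1/k prior dropped, as on the slide:

```python
import numpy as np

def mixture_log_likelihood(X, mu):
    """Likelihood of the centers: sum_i log sum_j exp(-||x_i - mu_j||^2 / 2),
    for unit-variance spherical Gaussians (constants omitted)."""
    d2 = ((X[:, None, :] - mu[None, :, :]) ** 2).sum(axis=2)  # (n, k) squared distances
    m = (-d2 / 2).max(axis=1, keepdims=True)                  # log-sum-exp for stability
    return float((m[:, 0] + np.log(np.exp(-d2 / 2 - m).sum(axis=1))).sum())

def kmeans_objective(X, mu):
    """k-means objective (likelihood of the assignment): sum_i min_j ||x_i - mu_j||^2."""
    d2 = ((X[:, None, :] - mu[None, :, :]) ** 2).sum(axis=2)
    return float(d2.min(axis=1).sum())
```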

  3. Is Clustering Hard or Easy?
  • k-means (and ML estimation?) is NP-hard
    – For some point configurations, it is hard to find the optimal solution.
    – But do these point configurations actually correspond to clusters of points?

  4. Is Clustering Hard or Easy?
  • k-means (and ML estimation?) is NP-hard
    – For some point configurations, it is hard to find the optimal solution.
    – But do these point configurations actually correspond to clusters of points?
  • Well-separated Gaussian clusters, lots of data
    – Poly-time algorithms for very large separation and number of points
    – Empirically, EM* works (modest separation, number of points)
  *EM with some bells and whistles: spectral projection (PCA), pruning centers, etc.

  5. Is Clustering Hard or Easy? (when it’s interesting)
  • k-means (and ML estimation?) is NP-hard
    – For some point configurations, it is hard to find the optimal solution.
    – But do these point configurations actually correspond to clusters of points?
  • Well-separated Gaussian clusters, lots of data
    – Poly-time algorithms for very large separation and number of points
    – Empirically, EM* works (modest separation, number of points)
  • Not enough data
    – Can’t identify clusters (ML clustering meaningless)
  *EM with some bells and whistles: spectral projection (PCA), pruning centers, etc.; a sketch follows below.
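The footnote’s “EM with some bells and whistles” could look roughly like the following scikit-learn sketch. The choice of projecting to the top k principal components, spherical covariances, and ten random restarts are illustrative assumptions here, not the exact procedure used in the experiments:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.mixture import GaussianMixture

def spectral_em(X, k, n_restarts=10, random_state=0):
    """Project to the top-k principal subspace (spectral projection), then fit a
    spherical Gaussian mixture with EM, keeping the best of several random restarts."""
    Z = PCA(n_components=k, random_state=random_state).fit_transform(X)
    best = None
    for seed in range(n_restarts):
        gm = GaussianMixture(n_components=k, covariance_type="spherical",
                             init_params="random", n_init=1, random_state=seed).fit(Z)
        if best is None or gm.lower_bound_ > best.lower_bound_:
            best = gm
    return best.predict(Z), best
```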

  6–8. Effect of “Signal Strength” [schematic built up over three slides, with axes running from small to large separation and from fewer to more samples: with large separation and lots of data, the true solution creates a distinct peak and is easy to find; with just enough data, the optimal solution is meaningful, but hard to find?; with not enough data, the “optimal” solution is meaningless. A computational limit and an informational limit separate these regimes.]

  9. Effect of “Signal Strength”
  • Infinite-data limit: $\mathbb{E}_x[\mathrm{cost}(x;\mathrm{model})] = \mathrm{KL}(\mathrm{true}\,\|\,\mathrm{model})$ (up to an additive constant), so the mode is always at the true model.
  • The shape is determined by the number of clusters (k), the dimensionality (d), and the separation (s).
  [plot: expected cost over models, peaked at the true model]

  10. Effect of “Signal Strength”
  • Infinite-data limit: $\mathbb{E}_x[\mathrm{cost}(x;\mathrm{model})] = \mathrm{KL}(\mathrm{true}\,\|\,\mathrm{model})$ (up to an additive constant); the mode is always at the true model, with shape determined by the number of clusters (k), the dimensionality (d), and the separation (s).
  • The actual log-likelihood also depends on the sample size (n): the “local ML model” is approximately distributed as $\mathcal{N}\!\big(\mathrm{true},\ \tfrac{1}{n} J^{-1}\big)$, where $J$ is the Fisher information [Redner & Walker 1984].
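Spelled out in LaTeX (with $\theta^*$ for the true model, $\hat\theta_n$ for the local ML estimate, and $J$ for the Fisher information, notation of my choosing; this is the standard local asymptotic normality result the slide cites from Redner & Walker 1984):

```latex
\[
  \mathbb{E}_x\!\left[-\log p_\theta(x)\right]
  = \mathrm{KL}\!\left(p_{\theta^*} \,\Vert\, p_\theta\right) + H\!\left(p_{\theta^*}\right),
\]
so in the infinite-data limit the expected cost is minimized at the true parameters $\theta^*$.
With $n$ samples, the local maximum-likelihood estimate near $\theta^*$ is asymptotically
\[
  \hat\theta_n \;\sim\; \mathcal{N}\!\left(\theta^*,\ \tfrac{1}{n}\,J(\theta^*)^{-1}\right),
  \qquad
  J(\theta^*) \;=\; \mathbb{E}_x\!\left[\nabla_\theta \log p_{\theta^*}(x)\,
                                        \nabla_\theta \log p_{\theta^*}(x)^{\top}\right].
\]
```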

  11–14. Informational and Computational Limits [diagram built up over four slides: separation (s) on the horizontal axis, sample size (n) on the vertical axis; regions labeled “Enough information to reconstruct”, “Enough information to efficiently reconstruct”, and “Not enough information to reconstruct (ML solution is random)”]

  15. Informational and Computational Limits
  • What are the informational and computational limits (in terms of sample size n and separation s)?
  • Is there a gap between them?
  • Is there some minimum required separation for computational tractability?
  • Is learning the centers always easy given the true distribution?
  We want analytic, quantitative answers, independent of any specific algorithm or estimator.

  16–17. Behavior as a Function of Sample Size (k=16, d=1024, separation=6σ) [plots: label error vs. sample size (100 to 3000) for “fair” EM, EM initialized from the true centers, the run attaining the maximum likelihood (fair or not), and the true centers; a second panel, added on slide 17, shows the difference in likelihood (bits/sample) between the “fair” EM runs and EM from the true centers, for each run (random init) and for the run attaining the maximum likelihood]
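A rough sketch of this kind of experiment, as a simplified stand-in rather than the talk’s actual code: the random center layout, the “fair EM” protocol of ten random restarts, and the use of scikit-learn’s spherical GaussianMixture are all assumptions on my part:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment
from sklearn.mixture import GaussianMixture

def make_gmm_data(n, k=16, d=1024, sep=6.0, seed=0):
    """Sample n points from a uniform mixture of k unit-variance spherical Gaussians.
    Random centers are scaled so typical pairwise center distance is about `sep`."""
    rng = np.random.default_rng(seed)
    centers = rng.standard_normal((k, d)) * (sep / np.sqrt(2 * d))
    labels = rng.integers(k, size=n)
    return centers[labels] + rng.standard_normal((n, d)), labels, centers

def label_error(true_labels, pred_labels, k):
    """Fraction of misassigned points after optimally matching cluster indices."""
    confusion = np.zeros((k, k))
    for t, p in zip(true_labels, pred_labels):
        confusion[t, p] += 1
    rows, cols = linear_sum_assignment(-confusion)  # Hungarian matching
    return 1.0 - confusion[rows, cols].sum() / len(true_labels)

for n in [100, 300, 1000, 3000]:
    X, y, centers = make_gmm_data(n)
    k = centers.shape[0]
    # "fair" EM: best of 10 random initializations, no peeking at the truth
    fair = GaussianMixture(n_components=k, covariance_type="spherical",
                           init_params="random", n_init=10, random_state=0).fit(X)
    # EM started from the true centers
    oracle = GaussianMixture(n_components=k, covariance_type="spherical",
                             means_init=centers, random_state=0).fit(X)
    print(n, label_error(y, fair.predict(X), k), label_error(y, oracle.predict(X), k))
```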

  18. Clustering
  • Model of clustering: What structure are we trying to capture? What properties do we expect the data to have? What are we trying to get out of it? What is a “good clustering”?
  • Empirical objective and evaluation (e.g. a minimization objective): Can it be used to recover the clustering (as specified above)? Post-hoc analysis: is what we found “real”?
  • Algorithm: How well does it achieve the objective? How efficient is it? Under what circumstances?

  19–22. Clustering
  • Model of clustering (questions about the world): What structure are we trying to capture? What properties do we expect the data to have? What are we trying to get out of it? What is a “good clustering”?
  • Empirical objective and evaluation (mathematics; e.g. a minimization objective): Can it be used to recover the clustering (as specified above)? Post-hoc analysis: is what we found “real”?
  • Algorithm (mathematics): How well does it achieve the objective? How efficient is it? Under what circumstances?

  23. Clustering (same breakdown as above, with one addition to the evaluation level: Can what we found generalize?)

  24. “Clustering is Easy”, take 1: Approximation Algorithms
  • A $(1+\varepsilon)$-approximation for k-means in time $O\big(2^{(k/\varepsilon)^{\mathrm{const}}}\, n d\big)$ [Kumar, Sabharwal & Sen 2004]
  • For any data set of points, it finds a clustering with k-means cost $\le (1+\varepsilon)$ times the cost of the optimal clustering.

  25. “Clustering is Easy”, take 1: Approximation Algorithms
  • A $(1+\varepsilon)$-approximation for k-means in time $O\big(2^{(k/\varepsilon)^{\mathrm{const}}}\, n d\big)$ [Kumar, Sabharwal & Sen 2004]
  • Example: data drawn from $0.5\,\mathcal{N}(\mu_1, I) + 0.5\,\mathcal{N}(\mu_2, I)$ with $\mu_1 = (5,0,0,\dots,0)$ and $\mu_2 = (-5,0,0,\dots,0)$.
    – $\mathrm{cost}([\mu_1,\mu_2]) = \sum_i \min_j \|x_i - \mu_j\|^2 \approx d \cdot n$
    – $\mathrm{cost}([0,0]) = \sum_i \|x_i - 0\|^2 \approx (d+25)\cdot n$
    – $\Rightarrow$ putting both centers at the origin is already a $(1+25/d)$-approximation.
  • So we need $\varepsilon < \mathrm{sep}^2/d$, and the running time becomes $O\big(2^{(kd/\mathrm{sep}^2)^{\mathrm{const}}}\, n\big)$.
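A quick numerical sanity check of this calculation (a sketch; the dimension d=100 and the sample size are arbitrary choices of mine):

```python
import numpy as np

rng = np.random.default_rng(0)
d, n = 100, 20000

# Two unit-variance Gaussians with means +/-5 along the first coordinate.
mu1 = np.zeros(d); mu1[0] = 5.0
mu2 = np.zeros(d); mu2[0] = -5.0
labels = rng.integers(2, size=n)
X = np.where(labels[:, None] == 0, mu1, mu2) + rng.standard_normal((n, d))

def kmeans_cost(X, centers):
    """sum_i min_j ||x_i - c_j||^2"""
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
    return d2.min(axis=1).sum()

cost_true = kmeans_cost(X, np.stack([mu1, mu2]))   # expect ~ d * n
cost_origin = kmeans_cost(X, np.zeros((2, d)))     # expect ~ (d + 25) * n
print(cost_true / n, cost_origin / n, cost_origin / cost_true)  # ~d, ~d+25, ~1 + 25/d
```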
