
15-388/688 - Practical Data Science: Anomaly detection and mixture of Gaussians



  1. 15-388/688 - Practical Data Science: Anomaly detection and mixture of Gaussians. J. Zico Kolter, Carnegie Mellon University, Spring 2018

  2. Outline: Anomalies and outliers; Multivariate Gaussian; Mixture of Gaussians

  3. Outline: Anomalies and outliers; Multivariate Gaussian; Mixture of Gaussians

  4. What is an β€œanomaly”? Two views of anomaly detection. Supervised view: anomalies are what some user labels as anomalies. Unsupervised view: anomalies are outliers (points of low probability) in the data. In reality, you want a combination of both of these viewpoints: not all outliers are anomalies, but all anomalies should be outliers. This lecture is going to focus on the unsupervised view, but this is only part of the full equation.

  5. What is an outlier? Outliers are points of low probability. Given a collection of data points $x^{(1)}, \ldots, x^{(m)}$, describe the points using some distribution, then find the points with the lowest $p(x^{(i)})$. Since we are considering points with no labels, this is an unsupervised learning algorithm (we could formulate it in terms of hypothesis, loss, and optimization, but for this lecture we will stick to the probabilistic notation). A minimal scoring loop is sketched below.
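
The unsupervised recipe translates directly into code: fit some density model to the data, then flag the lowest-probability points. A minimal sketch, assuming `log_prob` is any function returning $\log p(x)$ for a single point (a Gaussian fit to the data, covered later in the lecture, is one concrete choice):

```python
import numpy as np

def lowest_probability_points(X, log_prob, num_outliers=10):
    """Indices of the num_outliers points with the smallest log p(x)."""
    scores = np.array([log_prob(x) for x in X])  # log-probability of each point
    return np.argsort(scores)[:num_outliers]     # least likely points first
```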

  6. Outline: Anomalies and outliers; Multivariate Gaussian; Mixture of Gaussians

  7. Multivariate Gaussian distributions. We have seen Gaussian distributions previously, but mainly focused on distributions over scalar-valued data $x^{(i)} \in \mathbb{R}$:
   $$p(x; \mu, \sigma^2) = (2\pi\sigma^2)^{-1/2} \exp\left(-\frac{(x-\mu)^2}{2\sigma^2}\right)$$
   Gaussian distributions generalize nicely to distributions over vector-valued random variables $X$ taking values in $\mathbb{R}^n$:
   $$p(x; \mu, \Sigma) = |2\pi\Sigma|^{-1/2} \exp\left(-\frac{1}{2}(x-\mu)^T \Sigma^{-1} (x-\mu)\right) \equiv \mathcal{N}(x; \mu, \Sigma)$$
   with parameters $\mu \in \mathbb{R}^n$ and $\Sigma \in \mathbb{R}^{n \times n}$, where $|\cdot|$ denotes the determinant of a matrix (also written $X \sim \mathcal{N}(\mu, \Sigma)$).
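
As a sketch of how this density can be evaluated in practice, here is a small numpy function that computes $\log \mathcal{N}(x; \mu, \Sigma)$ directly from the formula above; `scipy.stats.multivariate_normal` provides the same quantity via its `logpdf` method.

```python
import numpy as np

def gaussian_log_density(x, mu, Sigma):
    """log N(x; mu, Sigma) = -1/2 log|2*pi*Sigma| - 1/2 (x-mu)^T Sigma^{-1} (x-mu)."""
    d = x - mu
    _, logdet = np.linalg.slogdet(2 * np.pi * Sigma)  # log |2*pi*Sigma|
    quad = d @ np.linalg.solve(Sigma, d)              # (x-mu)^T Sigma^{-1} (x-mu)
    return -0.5 * (logdet + quad)
```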

  8. Properties of multivariate Gaussians. Mean and covariance:
   $$\mathbf{E}[X] = \int_{\mathbb{R}^n} x\, \mathcal{N}(x; \mu, \Sigma)\, dx = \mu, \qquad \mathbf{Cov}[X] = \int_{\mathbb{R}^n} (x-\mu)(x-\mu)^T \mathcal{N}(x; \mu, \Sigma)\, dx = \Sigma$$
   (these are not obvious). Creation from univariate Gaussians: for $x \in \mathbb{R}^n$, if $p(x_i) = \mathcal{N}(x_i; 0, 1)$ (i.e., each element $x_i$ is an independent univariate Gaussian), then $y = Ax + b$ is also normal, with distribution $Y \sim \mathcal{N}(\mu = b, \Sigma = AA^T)$.
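
The β€œcreation from univariate Gaussians” property is also how samples are typically drawn in practice: take $A$ to be a Cholesky factor of $\Sigma$, so that $AA^T = \Sigma$. A minimal sketch:

```python
import numpy as np

def sample_gaussian(mu, Sigma, m, seed=0):
    """Draw m samples of N(mu, Sigma) as mu + A u, with u ~ N(0, I) and A A^T = Sigma."""
    rng = np.random.default_rng(seed)
    A = np.linalg.cholesky(Sigma)            # any A with A A^T = Sigma works
    U = rng.standard_normal((m, len(mu)))    # rows of independent standard normals
    return mu + U @ A.T                      # affine transform of the standard normals
```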

  9. Multivariate Gaussians, graphically (figure): $\mu = \begin{bmatrix} 3 \\ -4 \end{bmatrix}$, $\Sigma = \begin{bmatrix} 2.0 & 0.5 \\ 0.5 & 1.0 \end{bmatrix}$

  10. Multivariate Gaussians, graphically (figure): $\mu = \begin{bmatrix} 3 \\ -4 \end{bmatrix}$, $\Sigma = \begin{bmatrix} 2.0 & 0 \\ 0 & 1.0 \end{bmatrix}$

  11. Multivariate Gaussians, graphically (figure): $\mu = \begin{bmatrix} 3 \\ -4 \end{bmatrix}$, $\Sigma = \begin{bmatrix} 2.0 & 1.0 \\ 1.0 & 1.0 \end{bmatrix}$

  12. Multivariate Gaussians, graphically (figure): $\mu = \begin{bmatrix} 3 \\ -4 \end{bmatrix}$, $\Sigma = \begin{bmatrix} 2.0 & 1.4 \\ 1.4 & 1.0 \end{bmatrix}$

  13. Multivariate Gaussians, graphically (figure): $\mu = \begin{bmatrix} 3 \\ -4 \end{bmatrix}$, $\Sigma = \begin{bmatrix} 2.0 & -1.0 \\ -1.0 & 1.0 \end{bmatrix}$
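
Contour plots like the ones on slides 9–13 can be reproduced with a short matplotlib script. The sketch below uses the parameter settings listed above; the axis ranges are my own guesses, not taken from the slides.

```python
import numpy as np
import matplotlib.pyplot as plt

mu = np.array([3.0, -4.0])
Sigmas = [np.array([[2.0, 0.5], [0.5, 1.0]]),    # slide 9: mild positive correlation
          np.array([[2.0, 0.0], [0.0, 1.0]]),    # slide 10: axis-aligned (independent)
          np.array([[2.0, 1.0], [1.0, 1.0]]),    # slide 11
          np.array([[2.0, 1.4], [1.4, 1.0]]),    # slide 12: strong positive correlation
          np.array([[2.0, -1.0], [-1.0, 1.0]])]  # slide 13: negative correlation

xs, ys = np.meshgrid(np.linspace(-2, 8, 200), np.linspace(-9, 1, 200))
grid = np.stack([xs.ravel(), ys.ravel()], axis=1) - mu
fig, axes = plt.subplots(1, len(Sigmas), figsize=(18, 4))
for ax, Sigma in zip(axes, Sigmas):
    quad = np.sum(grid @ np.linalg.inv(Sigma) * grid, axis=1)      # (x-mu)^T Sigma^{-1} (x-mu)
    density = np.exp(-0.5 * quad) / np.sqrt(np.linalg.det(2 * np.pi * Sigma))
    ax.contour(xs, ys, density.reshape(xs.shape))
plt.show()
```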

  14. Maximum likelihood estimation. The maximum likelihood estimates of $\mu, \Sigma$ are what you would β€œexpect”, but the derivation is non-obvious:
   $$\underset{\mu, \Sigma}{\text{maximize}}\;\; \ell(\mu, \Sigma) = \sum_{i=1}^m \log p(x^{(i)}; \mu, \Sigma) = \sum_{i=1}^m \left( -\tfrac{1}{2} \log |2\pi\Sigma| - \tfrac{1}{2} (x^{(i)} - \mu)^T \Sigma^{-1} (x^{(i)} - \mu) \right)$$
   Taking gradients with respect to $\mu$ and $\Sigma$ and setting them equal to zero gives the closed-form solutions
   $$\mu = \frac{1}{m} \sum_{i=1}^m x^{(i)}, \qquad \Sigma = \frac{1}{m} \sum_{i=1}^m (x^{(i)} - \mu)(x^{(i)} - \mu)^T$$
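
In code the closed-form estimates are just a mean and an average of outer products; combined with the `gaussian_log_density` sketch above, this is enough to score data for outliers as on the next slides. A minimal sketch:

```python
import numpy as np

def fit_gaussian(X):
    """Closed-form maximum likelihood estimates for a multivariate Gaussian."""
    m = X.shape[0]
    mu = X.mean(axis=0)      # mu = (1/m) sum_i x^(i)
    D = X - mu
    Sigma = D.T @ D / m      # Sigma = (1/m) sum_i (x^(i) - mu)(x^(i) - mu)^T
    return mu, Sigma
```

For high-dimensional data such as MNIST, the estimated $\Sigma$ can be singular, so in practice a small multiple of the identity is often added before inverting; the slides do not discuss this detail.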

  15. Fitting a Gaussian to MNIST (figure: the estimated mean $\mu$ and covariance $\Sigma$ visualized as images)

  16. MNIST outliers (figure)

  17. Outline: Anomalies and outliers; Multivariate Gaussian; Mixture of Gaussians

  18. Limits of Gaussians. Though useful, multivariate Gaussians are limited in the types of distributions they can represent.

  19. Mixture models. A more powerful model to consider is a mixture of Gaussian distributions, a distribution where we first consider a categorical variable
   $$z \sim \text{Categorical}(\phi), \qquad \phi \in [0,1]^k, \;\; \sum_i \phi_i = 1$$
   i.e., $z$ takes on values $1, \ldots, k$. For each potential value of $z$, we consider a separate Gaussian distribution:
   $$X \mid z = j \sim \mathcal{N}(\mu_j, \Sigma_j), \qquad \mu_j \in \mathbb{R}^n, \;\; \Sigma_j \in \mathbb{R}^{n \times n}$$
   We can write the distribution of $X$ using marginalization:
   $$p(X) = \sum_j p(X \mid z = j)\, p(z = j) = \sum_j \mathcal{N}(x; \mu_j, \Sigma_j)\, \phi_j$$
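
The marginalization formula is a weighted sum of Gaussian densities. A small sketch that reuses the `gaussian_log_density` helper from earlier and computes $\log p(x)$ with the log-sum-exp trick (the trick is a numerical-stability detail of mine, not something the slide covers):

```python
import numpy as np

def mixture_log_density(x, phi, mus, Sigmas):
    """log p(x) = log sum_j phi_j N(x; mu_j, Sigma_j), computed stably."""
    logs = np.array([np.log(p) + gaussian_log_density(x, mu, Sigma)
                     for p, mu, Sigma in zip(phi, mus, Sigmas)])
    top = logs.max()
    return top + np.log(np.exp(logs - top).sum())   # log-sum-exp
```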

  20. Learning mixture models. To estimate parameters, suppose first that we can observe both $X$ and $z$, i.e., our data set is of the form $(x^{(i)}, z^{(i)})$, $i = 1, \ldots, m$. In this case, we can maximize the log-likelihood of the parameters:
   $$\ell(\mu, \Sigma, \phi) = \sum_{i=1}^m \log p(x^{(i)}, z^{(i)}; \mu, \Sigma, \phi)$$
   Without getting into the full details, it hopefully should not be too surprising that the solutions are given by:
   $$\phi_j = \frac{\sum_{i=1}^m \mathbf{1}\{z^{(i)} = j\}}{m}, \qquad \mu_j = \frac{\sum_{i=1}^m \mathbf{1}\{z^{(i)} = j\}\, x^{(i)}}{\sum_{i=1}^m \mathbf{1}\{z^{(i)} = j\}}, \qquad \Sigma_j = \frac{\sum_{i=1}^m \mathbf{1}\{z^{(i)} = j\}\, (x^{(i)} - \mu_j)(x^{(i)} - \mu_j)^T}{\sum_{i=1}^m \mathbf{1}\{z^{(i)} = j\}}$$
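
With the component labels observed, these estimates are just per-component counts, means, and covariances. A minimal sketch, assuming the labels are stored as integers $0, \ldots, k-1$ (the slides use 1-based indexing):

```python
import numpy as np

def fit_mixture_observed(X, z, k):
    """Closed-form estimates when each component label z^(i) is observed."""
    m = X.shape[0]
    phi, mus, Sigmas = np.zeros(k), [], []
    for j in range(k):
        Xj = X[z == j]                     # points with z^(i) = j
        phi[j] = len(Xj) / m               # phi_j = (1/m) sum_i 1{z^(i) = j}
        mu_j = Xj.mean(axis=0)
        D = Xj - mu_j
        mus.append(mu_j)
        Sigmas.append(D.T @ D / len(Xj))   # per-component covariance
    return phi, mus, Sigmas
```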

  21. Latent variables and expectation maximization. In the unsupervised setting, the $z^{(i)}$ terms will not be known; these are referred to as hidden or latent random variables. This means that to estimate the parameters, we can't use the indicator function $\mathbf{1}\{z^{(i)} = j\}$ anymore. The expectation maximization (EM) algorithm, at a high level: replace the indicators $\mathbf{1}\{z^{(i)} = j\}$ with probability estimates $p(z^{(i)} = j \mid x^{(i)}; \mu, \Sigma, \phi)$. When we re-estimate the parameters, these probabilities change, so repeat: E (expectation) step: compute $p(z^{(i)} = j \mid x^{(i)}; \mu, \Sigma, \phi)$ for all $i, j$. M (maximization) step: re-estimate $\mu, \Sigma, \phi$.

  22. EM for Gaussian mixture models. E step: using Bayes' rule, compute the probabilities
   $$q_j^{(i)} = p(z^{(i)} = j \mid x^{(i)}; \mu, \Sigma, \phi) = \frac{p(x^{(i)} \mid z^{(i)} = j; \mu, \Sigma)\, p(z^{(i)} = j; \phi)}{\sum_{j'} p(x^{(i)} \mid z^{(i)} = j'; \mu, \Sigma)\, p(z^{(i)} = j'; \phi)} = \frac{\mathcal{N}(x^{(i)}; \mu_j, \Sigma_j)\, \phi_j}{\sum_{j'} \mathcal{N}(x^{(i)}; \mu_{j'}, \Sigma_{j'})\, \phi_{j'}}$$
   M step: re-estimate the parameters using these probabilities:
   $$\phi_j \leftarrow \frac{\sum_{i=1}^m q_j^{(i)}}{m}, \qquad \mu_j \leftarrow \frac{\sum_{i=1}^m q_j^{(i)}\, x^{(i)}}{\sum_{i=1}^m q_j^{(i)}}, \qquad \Sigma_j \leftarrow \frac{\sum_{i=1}^m q_j^{(i)}\, (x^{(i)} - \mu_j)(x^{(i)} - \mu_j)^T}{\sum_{i=1}^m q_j^{(i)}}$$
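
Putting the E and M steps together gives the full algorithm. The sketch below follows the updates above; the random initialization and the small diagonal term added to each covariance (to keep it invertible) are practical choices of mine, not part of the slides.

```python
import numpy as np

def em_gmm(X, k, iters=100, seed=0):
    """EM for a mixture of Gaussians (no convergence check, for illustration only)."""
    rng = np.random.default_rng(seed)
    m, n = X.shape
    phi = np.full(k, 1.0 / k)
    mus = X[rng.choice(m, size=k, replace=False)]                 # init means at random data points
    Sigmas = np.array([np.cov(X.T) + 1e-6 * np.eye(n) for _ in range(k)])
    for _ in range(iters):
        # E step: q[i, j] = p(z^(i) = j | x^(i); mu, Sigma, phi) via Bayes' rule
        q = np.zeros((m, k))
        for j in range(k):
            D = X - mus[j]
            quad = np.sum(D @ np.linalg.inv(Sigmas[j]) * D, axis=1)
            q[:, j] = phi[j] * np.exp(-0.5 * quad) / np.sqrt(np.linalg.det(2 * np.pi * Sigmas[j]))
        q /= q.sum(axis=1, keepdims=True)
        # M step: same formulas as the observed case, indicators replaced by probabilities
        Nj = q.sum(axis=0)
        phi = Nj / m
        mus = (q.T @ X) / Nj[:, None]
        for j in range(k):
            D = X - mus[j]
            Sigmas[j] = (q[:, j, None] * D).T @ D / Nj[j] + 1e-6 * np.eye(n)
    return phi, mus, Sigmas
```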

  23. Local optima. Like k-means, EM is effectively optimizing a non-convex problem, so there is a very real possibility of local optima (seemingly more so than for k-means, in practice). The same heuristics work as for k-means (in fact, it is common to initialize EM with clusters from k-means); one version is sketched below.
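
A simple way to apply these heuristics, assuming the `em_gmm` and `mixture_log_density` sketches above: run EM from several random seeds (or from k-means centers) and keep the parameters with the highest log-likelihood.

```python
import numpy as np

def em_with_restarts(X, k, restarts=5):
    """Run EM several times and keep the highest-likelihood solution."""
    best, best_ll = None, -np.inf
    for seed in range(restarts):
        params = em_gmm(X, k, seed=seed)
        ll = sum(mixture_log_density(x, *params) for x in X)
        if ll > best_ll:
            best, best_ll = params, ll
    return best
```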

  24. Illustration of EM algorithm (slides 24–28: a sequence of figure-only slides illustrating successive EM iterations)

  29. Possibility of local optima (slides 29–32: figure-only slides illustrating local optima of EM)

  33. Poll: outliers in a mixture of Gaussians. Consider the following cartoon dataset (figure). If we fit a mixture of two Gaussians to this data via the EM algorithm, which group of points is likely to contain more β€œoutliers” (points with the lowest $p(x)$)? 1. Left group 2. Right group 3. Equal chance of each, depending on initialization

  34. EM and k-means. As you may have noticed, EM for a mixture of Gaussians and k-means seem to be doing very similar things. The primary differences: EM computes β€œdistances” based upon the inverse covariance matrix, and it allows for β€œsoft” assignments instead of hard assignments.
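
To make the comparison concrete, a small sketch of the two ingredients mentioned above: the Mahalanobis-style β€œdistance” induced by the inverse covariance, and the contrast between soft responsibilities and a k-means-style hard assignment (using the `q` matrix from the EM sketch).

```python
import numpy as np

def mahalanobis_sq(x, mu, Sigma):
    """Squared 'distance' EM effectively uses: (x - mu)^T Sigma^{-1} (x - mu)."""
    d = x - mu
    return d @ np.linalg.solve(Sigma, d)

# k-means assigns each point to the single closest center (a hard assignment);
# EM keeps the full posterior over components (a soft assignment):
# hard_assignment = q.argmax(axis=1)    # collapse responsibilities to one label
```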
