compsci 514: algorithms for data science



  1. compsci 514: algorithms for data science. Cameron Musco, University of Massachusetts Amherst. Fall 2019. Lecture 16.

  2. summary. Last Class: • Spectral clustering and embeddings. • Started application to the stochastic block model. This Class: • Finish up the stochastic block model. • Efficient algorithms for SVD/eigendecomposition. • Iterative methods: power method, Krylov subspace methods.

  4. stochastic block model. Goal: Argue the effectiveness of spectral clustering in a natural, if oversimplified, generative model. Stochastic Block Model (Planted Partition Model): Let G_n(p, q) be a distribution over graphs on n nodes, split equally into two groups B and C, each with n/2 nodes. • Any two nodes in the same group are connected with probability p (including self-loops). • Any two nodes in different groups are connected with probability q < p. • Connections are independent.
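The generative process above is simple to simulate directly. Below is a minimal sketch (not from the slides; the function name sample_sbm and the parameter values are my own choices) that samples an adjacency matrix from G_n(p, q), taking B to be the first n/2 nodes and C the last n/2:

```python
import numpy as np

def sample_sbm(n, p, q, seed=None):
    """Sample an adjacency matrix from G_n(p, q): nodes 0..n/2-1 are group B,
    nodes n/2..n-1 are group C. Self-loops are sampled with probability p."""
    rng = np.random.default_rng(seed)
    half = n // 2
    # Edge probabilities: p on the within-group blocks, q on the cross-group blocks.
    probs = np.full((n, n), q)
    probs[:half, :half] = p
    probs[half:, half:] = p
    # Sample each pair independently in the upper triangle, then symmetrize.
    upper = np.triu((rng.random((n, n)) < probs).astype(float))
    A = upper + upper.T - np.diag(np.diag(upper))
    return A

# Example draw: n = 200 nodes, within-group probability 0.6, cross-group 0.1.
A = sample_sbm(200, p=0.6, q=0.1, seed=0)
```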

  6. expected adjacency spectrum. Notation: G_n(p, q): stochastic block model distribution. B, C: groups with n/2 nodes each. Connections are independent, with probability p between nodes in the same group and probability q between nodes in different groups. Letting G be a stochastic block model graph drawn from G_n(p, q) and A ∈ R^{n×n} be its adjacency matrix, (E[A])_{i,j} = p for i, j in the same group and (E[A])_{i,j} = q otherwise. What is the rank of E[A], and how can you see this quickly? How many nonzero eigenvalues does E[A] have?
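To make the rank question concrete, here is a quick numerical check of my own (small n, arbitrary p and q): build E[A] explicitly and ask for its rank and eigenvalues.

```python
import numpy as np

n, p, q = 10, 0.6, 0.1
half = n // 2
# E[A]: every within-group entry is p, every cross-group entry is q.
EA = np.full((n, n), q)
EA[:half, :half] = p
EA[half:, half:] = p

print(np.linalg.matrix_rank(EA))            # 2 -- only two distinct row patterns
print(np.round(np.linalg.eigvalsh(EA), 6))  # exactly two nonzero eigenvalues
```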

  9. expected adjacency spectrum. • v_1 = 1 (the all-ones vector) with eigenvalue λ_1 = (p + q)n/2. • v_2 = χ_{B,C} with eigenvalue λ_2 = (p − q)n/2, where χ_{B,C}(i) = 1 if i ∈ B and χ_{B,C}(i) = −1 if i ∈ C. If we compute v_2 then we recover the communities B and C!
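These two eigenpairs are easy to verify numerically. A short sanity check of my own (arbitrary n, p, and q):

```python
import numpy as np

n, p, q = 10, 0.6, 0.1
half = n // 2
EA = np.full((n, n), q)
EA[:half, :half] = p
EA[half:, half:] = p

ones = np.ones(n)                                       # v_1
chi = np.concatenate([np.ones(half), -np.ones(half)])   # v_2 = chi_{B,C}

# E[A] v_1 = ((p+q)n/2) v_1 and E[A] v_2 = ((p-q)n/2) v_2.
print(np.allclose(EA @ ones, (p + q) * n / 2 * ones))   # True
print(np.allclose(EA @ chi,  (p - q) * n / 2 * chi))    # True
```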

  12. expected laplacian spectrum. Letting G be a stochastic block model graph drawn from G_n(p, q), A ∈ R^{n×n} be its adjacency matrix, and L be its Laplacian, what are the eigenvectors and eigenvalues of E[L]?
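One way to explore the question before the next slide answers it is to build E[L] numerically and inspect its spectrum. This sketch is my own and assumes the unnormalized Laplacian L = D − A, with D the diagonal degree matrix:

```python
import numpy as np

n, p, q = 10, 0.6, 0.1
half = n // 2
EA = np.full((n, n), q)
EA[:half, :half] = p
EA[half:, half:] = p

# Expected Laplacian: E[L] = E[D] - E[A], where E[D] is diagonal
# with the expected degrees (p+q)n/2.
EL = np.diag(EA.sum(axis=1)) - EA

vals, vecs = np.linalg.eigh(EL)   # eigenvalues in ascending order
print(np.round(vals, 6))          # 0, then q*n, then (p+q)n/2 repeated n-2 times
print(np.round(vecs[:, 1], 3))    # second-smallest eigenvector: +/- chi_{B,C}/sqrt(n)
```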

  14. expected laplacian spectrum. Upshot: The second smallest eigenvector of E[L] is χ_{B,C}, the indicator vector for the cut between the communities. • If the random graph G (equivalently A and L) were exactly equal to its expectation, partitioning using this eigenvector would exactly recover the two communities B and C. How do we show that a matrix (e.g., A) is close to its expectation? Matrix concentration inequalities. • Analogous to scalar concentration inequalities like Markov's, Chebyshev's, and Bernstein's. • Random matrix theory is a very recent and cutting-edge subfield of mathematics that is being actively applied in computer science, statistics, and ML.
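To see the whole pipeline run on an actual random draw rather than on the expectation, here is a small experiment of my own (the parameter values are arbitrary): sample G ~ G_n(p, q), form its Laplacian, and partition on the sign of the second-smallest eigenvector.

```python
import numpy as np

rng = np.random.default_rng(0)
n, p, q = 400, 0.5, 0.05
half = n // 2

# Sample A ~ G_n(p, q); nodes 0..half-1 are the planted community B.
probs = np.full((n, n), q)
probs[:half, :half] = p
probs[half:, half:] = p
upper = np.triu((rng.random((n, n)) < probs).astype(float))
A = upper + upper.T - np.diag(np.diag(upper))

# Laplacian of the sampled graph and its second-smallest eigenvector.
L = np.diag(A.sum(axis=1)) - A
vals, vecs = np.linalg.eigh(L)
v2 = vecs[:, 1]

# Partition on the sign of v2 and compare against the planted communities.
guess = np.where(v2 >= 0, 1.0, -1.0)
truth = np.concatenate([np.ones(half), -np.ones(half)])
accuracy = max(np.mean(guess == truth), np.mean(guess == -truth))
print(f"fraction of nodes recovered: {accuracy:.3f}")
```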

  19. matrix concentration. Matrix Concentration Inequality: If p ≥ O(log^4 n / n), then with high probability ∥A − E[A]∥_2 ≤ O(√(pn)), where ∥·∥_2 is the matrix spectral norm (operator norm). For any X ∈ R^{n×d}, ∥X∥_2 = max_{z ∈ R^d : ∥z∥_2 = 1} ∥Xz∥_2. Exercise: Show that ∥X∥_2 is equal to the largest singular value of X. For symmetric X (like A − E[A]), show that it is equal to the magnitude of the largest-magnitude eigenvalue. For the stochastic block model application, we want to show that the second eigenvectors of A and E[A] are close. How does this relate to their difference in spectral norm?
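The inequality is asymptotic, but the √(pn) scaling is easy to eyeball empirically. A rough check of my own (constants and parameter values are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(1)
p, q = 0.5, 0.05

for n in [200, 400, 800, 1600]:
    half = n // 2
    EA = np.full((n, n), q)
    EA[:half, :half] = p
    EA[half:, half:] = p
    upper = np.triu((rng.random((n, n)) < EA).astype(float))
    A = upper + upper.T - np.diag(np.diag(upper))
    # Spectral norm of the deviation vs. the sqrt(p*n) scale in the bound.
    dev = np.linalg.norm(A - EA, ord=2)
    print(f"n={n:5d}  ||A - E[A]||_2 = {dev:7.2f}  sqrt(p*n) = {np.sqrt(p * n):6.2f}")
```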
