Random Sampling of Bandlimited Signals on Graphs


  1. Random Sampling of Bandlimited Signals on Graphs. Pierre Vandergheynst, École Polytechnique Fédérale de Lausanne (EPFL), School of Engineering & School of Computer and Communication Sciences. Joint work with Gilles Puy (INRIA), Nicolas Tremblay (INRIA) and Rémi Gribonval (INRIA). NIPS 2015 Workshop on Multiresolution Methods for Large Scale Learning.

  2. Motivation: Energy Networks, Social Networks, Transportation Networks, Biological Networks, Point Clouds.

  3. Goal: Given partially observed information at the nodes of a graph, can we robustly and efficiently infer the missing information? What signal model? How many observations? What is the influence of the structure of the graph?

  4. Notations: We consider a weighted, undirected graph $G = \{V, E, W\}$, where $V$ is the set of $n$ nodes, $E$ is the set of edges, and $W \in \mathbb{R}^{n \times n}$ is the weighted adjacency matrix. The combinatorial graph Laplacian is $L := D - W \in \mathbb{R}^{n \times n}$ and the normalised Laplacian is $L := I - D^{-1/2} W D^{-1/2}$, where the diagonal degree matrix $D$ has entries $d_i := \sum_{j \neq i} W_{ij}$.
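
To make the notation concrete, here is a minimal NumPy sketch (an illustration added here, not part of the slides) that builds both Laplacians; the small random graph and all variable names are stand-ins.

```python
import numpy as np

n = 5
rng = np.random.default_rng(0)
W = np.triu(rng.random((n, n)), 1)
W = W + W.T                        # symmetric weighted adjacency, zero diagonal

d = W.sum(axis=1)                  # degrees d_i = sum_{j != i} W_ij
D = np.diag(d)

L_comb = D - W                     # combinatorial Laplacian L = D - W
D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
L_norm = np.eye(n) - D_inv_sqrt @ W @ D_inv_sqrt   # I - D^{-1/2} W D^{-1/2}
```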

  5. Notations: $L$ is real, symmetric, and positive semi-definite, so it admits an eigendecomposition $L = U \Lambda U^\top$ with orthonormal eigenvectors $U \in \mathbb{R}^{n \times n}$ (the graph Fourier matrix) and non-negative eigenvalues $\lambda_1 \le \lambda_2 \le \cdots \le \lambda_n$.

  6. Notations: With the same eigendecomposition as above, we define $k$-bandlimited signals. A signal $x \in \mathbb{R}^n$ with Fourier coefficients $\hat{x} = U^\top x$ is $k$-bandlimited if $x = U_k \hat{x}_k$ for some $\hat{x}_k \in \mathbb{R}^k$, where $U_k := (u_1, \ldots, u_k) \in \mathbb{R}^{n \times k}$ contains the first $k$ eigenvectors only.
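
A minimal sketch of the graph Fourier basis and the synthesis of a $k$-bandlimited signal, again on a random stand-in graph (not from the slides):

```python
import numpy as np

rng = np.random.default_rng(1)
n, k = 50, 5
W = np.triu(rng.random((n, n)), 1); W = W + W.T
L = np.diag(W.sum(axis=1)) - W

lam, U = np.linalg.eigh(L)        # ascending eigenvalues, orthonormal eigenvectors
U_k = U[:, :k]                    # first k eigenvectors (lowest graph frequencies)

x_hat_k = rng.standard_normal(k)  # arbitrary Fourier coefficients
x = U_k @ x_hat_k                 # k-bandlimited signal x = U_k x_hat_k

# sanity check: no energy outside the first k frequencies
assert np.allclose(U[:, k:].T @ x, 0.0, atol=1e-8)
```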

  7. Sampling Model: Choose a sampling distribution $p \in \mathbb{R}^n$ with $p_i > 0$ and $\|p\|_1 = \sum_{i=1}^{n} p_i = 1$, and set $P := \mathrm{diag}(p) \in \mathbb{R}^{n \times n}$.

  8. Sampling Model: With $p$ as above, draw $m$ samples independently (random sampling): $\mathbb{P}(\omega_j = i) = p_i$, for all $j \in \{1, \ldots, m\}$ and all $i \in \{1, \ldots, n\}$.

  9. Sampling Model: The observed values are $y_j := x_{\omega_j}$ for all $j \in \{1, \ldots, m\}$, i.e. $y = M x$ where $M$ is the associated subsampling matrix.
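
A minimal sketch of this sampling model (not from the slides); the uniform $p$ is chosen purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(2)
n, m = 50, 10
x = rng.standard_normal(n)            # stand-in for a k-bandlimited signal

p = np.full(n, 1.0 / n)               # here: uniform sampling distribution
omega = rng.choice(n, size=m, p=p)    # P(omega_j = i) = p_i, drawn independently

M = np.zeros((m, n))
M[np.arange(m), omega] = 1.0          # one 1 per row: (Mx)_j = x_{omega_j}
y = M @ x
assert np.allclose(y, x[omega])
```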

  10. Sampling Model: Since $\|\delta_i\|_2 = \|U^\top \delta_i\|_2 = 1$, the quantity $\frac{\|U_k^\top \delta_i\|_2}{\|\delta_i\|_2} = \frac{\|U_k^\top \delta_i\|_2}{\|U^\top \delta_i\|_2} = \|U_k^\top \delta_i\|_2$ measures how much a perfect impulse at node $i$ can be concentrated on the first $k$ eigenvectors. It carries interesting information about the graph.

  11. Sampling Model: Given the quantity above, ideally $p_i$ should be large wherever $\|U_k^\top \delta_i\|_2$ is large.

  12. Sampling Model: This motivates the graph weighted coherence $\nu_p^k := \max_{1 \le i \le n} \left\{ p_i^{-1/2} \, \|U_k^\top \delta_i\|_2 \right\}$. Remark: $\nu_p^k \ge \sqrt{k}$.
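
A minimal sketch of the coherence computation (not from the slides): $\|U_k^\top \delta_i\|_2$ is simply the norm of the $i$-th row of $U_k$, and the uniform $p$ is again just an example.

```python
import numpy as np

rng = np.random.default_rng(3)
n, k = 50, 5
W = np.triu(rng.random((n, n)), 1); W = W + W.T
L = np.diag(W.sum(axis=1)) - W
U_k = np.linalg.eigh(L)[1][:, :k]

p = np.full(n, 1.0 / n)                  # uniform sampling distribution
row_norms = np.linalg.norm(U_k, axis=1)  # ||U_k^T delta_i||_2 = norm of row i of U_k
nu = np.max(row_norms / np.sqrt(p))      # graph weighted coherence nu_p^k

assert nu >= np.sqrt(k) - 1e-9           # the remark: nu_p^k >= sqrt(k)
print(nu**2, ">= k =", k)
```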

  13. Stable Embedding: Theorem 1 (Restricted isometry property). Let $M$ be a random subsampling matrix with the sampling distribution $p$. For any $\delta, \epsilon \in (0, 1)$, with probability at least $1 - \epsilon$,
$$(1 - \delta) \|x\|_2^2 \;\le\; \frac{1}{m} \left\| M P^{-1/2} x \right\|_2^2 \;\le\; (1 + \delta) \|x\|_2^2 \qquad (1)$$
for all $x \in \mathrm{span}(U_k)$, provided that
$$m \;\ge\; \frac{3}{\delta^2} \, (\nu_p^k)^2 \, \log\!\left(\frac{2k}{\epsilon}\right). \qquad (2)$$

  14. Stable Embedding: Under Theorem 1, note that $M P^{-1/2} x = P_\Omega^{-1/2} M x$, where $P_\Omega := \mathrm{diag}(p_{\omega_1}, \ldots, p_{\omega_m})$. We only need the samples $M x$; the re-weighting can be done offline.

  15. Stable Embedding: In addition, since $(\nu_p^k)^2 \ge k$, we need to sample at least $k$ nodes.

  16. Stable Embedding: The proof is similar to compressed sensing in a bounded orthonormal basis, but simpler since the model is a subspace (not a union of subspaces).
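
As an empirical illustration (not part of the slides): the RIP (1) over $\mathrm{span}(U_k)$ holds exactly when all singular values of $\frac{1}{\sqrt{m}} P_\Omega^{-1/2} M U_k$ lie in $[\sqrt{1-\delta}, \sqrt{1+\delta}]$, which a sketch like the following can check for one random draw.

```python
import numpy as np

rng = np.random.default_rng(4)
n, k, m = 200, 5, 120
W = np.triu(rng.random((n, n)), 1); W = W + W.T
L = np.diag(W.sum(axis=1)) - W
U_k = np.linalg.eigh(L)[1][:, :k]

p = np.full(n, 1.0 / n)
omega = rng.choice(n, size=m, p=p)
B = U_k[omega, :] / np.sqrt(p[omega])[:, None]   # P_Omega^{-1/2} M U_k
s = np.linalg.svd(B / np.sqrt(m), compute_uv=False)

delta = max(1.0 - s.min()**2, s.max()**2 - 1.0)  # smallest delta for which (1) holds
print("empirical RIP constant:", delta)
```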

  17. Stable Embedding: Recall that $(\nu_p^k)^2 \ge k$, so we need to sample at least $k$ nodes.

  18. Stable Embedding: Can we reduce the number of samples to this optimal amount?

  19. Stable Embedding: Variable Density Sampling. Set $p^*_i := \frac{\|U_k^\top \delta_i\|_2^2}{k}$, $i = 1, \ldots, n$. This distribution is such that $(\nu_{p^*}^k)^2 = k$, and it depends on the structure of the graph.

  20. Stable Embedding: Corollary 1. Let $M$ be a random subsampling matrix constructed with the sampling distribution $p^*$. For any $\delta, \epsilon \in (0, 1)$, with probability at least $1 - \epsilon$,
$$(1 - \delta) \|x\|_2^2 \;\le\; \frac{1}{m} \left\| M P^{-1/2} x \right\|_2^2 \;\le\; (1 + \delta) \|x\|_2^2$$
for all $x \in \mathrm{span}(U_k)$, provided that
$$m \;\ge\; \frac{3}{\delta^2} \, k \, \log\!\left(\frac{2k}{\epsilon}\right).$$
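
A minimal sketch of the variable-density distribution $p^*$ and its optimality (not from the slides; the random graph is a stand-in):

```python
import numpy as np

rng = np.random.default_rng(5)
n, k = 50, 5
W = np.triu(rng.random((n, n)), 1); W = W + W.T
L = np.diag(W.sum(axis=1)) - W
U_k = np.linalg.eigh(L)[1][:, :k]

p_star = np.linalg.norm(U_k, axis=1)**2 / k    # p*_i = ||U_k^T delta_i||_2^2 / k
assert np.isclose(p_star.sum(), 1.0)           # sums to ||U_k||_F^2 / k = 1

nu_sq = np.max(np.linalg.norm(U_k, axis=1)**2 / p_star)
assert np.isclose(nu_sq, k)                    # optimal coherence: (nu_{p*}^k)^2 = k
```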

  21. Recovery Procedures: We observe $y = M x + n$, $y \in \mathbb{R}^m$, with $x \in \mathrm{span}(U_k)$ and the stable embedding in force.

  22. Recovery Procedures: Standard Decoder: $\min_{z \in \mathrm{span}(U_k)} \left\| P_\Omega^{-1/2} (M z - y) \right\|_2$.

  23. Recovery Procedures: The Standard Decoder needs a projector onto $\mathrm{span}(U_k)$.

  24. Recovery Procedures: In the Standard Decoder, the re-weighting by $P_\Omega^{-1/2}$ matches the RIP, and the constraint $z \in \mathrm{span}(U_k)$ requires a projector.
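
A minimal sketch of the Standard Decoder (not from the slides): writing $z = U_k \alpha$ turns it into a small weighted least-squares problem over $\alpha \in \mathbb{R}^k$; here solved in the noiseless case, where recovery is exact.

```python
import numpy as np

rng = np.random.default_rng(6)
n, k, m = 200, 5, 120
W = np.triu(rng.random((n, n)), 1); W = W + W.T
L = np.diag(W.sum(axis=1)) - W
U_k = np.linalg.eigh(L)[1][:, :k]

x = U_k @ rng.standard_normal(k)       # ground-truth k-bandlimited signal
p = np.full(n, 1.0 / n)
omega = rng.choice(n, size=m, p=p)
y = x[omega]                           # noiseless observations y = M x

w = 1.0 / np.sqrt(p[omega])            # diagonal of P_Omega^{-1/2}
alpha, *_ = np.linalg.lstsq(w[:, None] * U_k[omega, :], w * y, rcond=None)
x_star = U_k @ alpha

print("relative error:", np.linalg.norm(x_star - x) / np.linalg.norm(x))
```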

  25. Recovery Procedures: As before, we observe $y = M x + n$, $y \in \mathbb{R}^m$, with $x \in \mathrm{span}(U_k)$ and the stable embedding in force.

  26. Recovery Procedures: Efficient Decoder: $\min_{z \in \mathbb{R}^n} \left\| P_\Omega^{-1/2} (M z - y) \right\|_2^2 + \gamma \, z^\top g(L) \, z$.

  27. Recovery Procedures: In the Efficient Decoder, the penalty $z^\top g(L) z$ acts as a soft constraint on frequencies, and the problem admits an efficient implementation.
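
A minimal sketch of the Efficient Decoder with the illustrative choice $g(L) = L$ (not from the slides): the minimiser solves the normal equations $(M^\top P_\Omega^{-1} M + \gamma L)\, z = M^\top P_\Omega^{-1} y$, a single linear system with no eigendecomposition of $L$ required; the penalty weight $\gamma$ below is an arbitrary stand-in.

```python
import numpy as np

rng = np.random.default_rng(7)
n, k, m, gamma = 200, 5, 120, 1e-2
W = np.triu(rng.random((n, n)), 1); W = W + W.T
L = np.diag(W.sum(axis=1)) - W

x = np.linalg.eigh(L)[1][:, :k] @ rng.standard_normal(k)  # bandlimited truth
p = np.full(n, 1.0 / n)
omega = rng.choice(n, size=m, p=p)
y = x[omega] + 0.01 * rng.standard_normal(m)              # noisy samples

# Normal equations: (M^T P_Omega^{-1} M + gamma L) z = M^T P_Omega^{-1} y
A = gamma * L
np.add.at(A, (omega, omega), 1.0 / p[omega])              # accumulates repeated nodes
b = np.zeros(n)
np.add.at(b, omega, y / p[omega])
z = np.linalg.solve(A, b)

print("relative error:", np.linalg.norm(z - x) / np.linalg.norm(x))
```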

  28. Analysis of Standard Decoder: Standard Decoder: $\min_{z \in \mathrm{span}(U_k)} \left\| P_\Omega^{-1/2} (M z - y) \right\|_2$.

  29. Analysis of Standard Decoder: Theorem. Let $\Omega$ be a set of $m$ indices selected independently from $\{1, \ldots, n\}$ with sampling distribution $p \in \mathbb{R}^n$, and $M$ the associated sampling matrix. Let $\epsilon, \delta \in (0, 1)$ and $m \ge \frac{3}{\delta^2} (\nu_p^k)^2 \log\!\left(\frac{2k}{\epsilon}\right)$. With probability at least $1 - \epsilon$, the following holds for all $x \in \mathrm{span}(U_k)$ and all $n \in \mathbb{R}^m$.
i) Let $x^*$ be the solution of the Standard Decoder with $y = M x + n$. Then
$$\|x^* - x\|_2 \;\le\; \frac{2}{\sqrt{m (1 - \delta)}} \left\| P_\Omega^{-1/2} n \right\|_2. \qquad (1)$$
ii) There exist particular vectors $n_0 \in \mathbb{R}^m$ such that the solution $x^*$ of the Standard Decoder with $y = M x + n_0$ satisfies
$$\|x^* - x\|_2 \;\ge\; \frac{1}{\sqrt{m (1 + \delta)}} \left\| P_\Omega^{-1/2} n_0 \right\|_2. \qquad (2)$$

  30. Analysis of Standard Decoder: In the theorem above, part i) implies exact recovery in the noiseless case ($n = 0$).
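
A minimal numerical check of bound (1) for one random draw (an illustration added here, not from the slides), with the RIP constant $\delta$ estimated empirically from the sampled rows of $U_k$:

```python
import numpy as np

rng = np.random.default_rng(8)
n, k, m = 200, 5, 120
W = np.triu(rng.random((n, n)), 1); W = W + W.T
L = np.diag(W.sum(axis=1)) - W
U_k = np.linalg.eigh(L)[1][:, :k]

x = U_k @ rng.standard_normal(k)
p = np.full(n, 1.0 / n)
omega = rng.choice(n, size=m, p=p)
noise = 0.05 * rng.standard_normal(m)
y = x[omega] + noise

w = 1.0 / np.sqrt(p[omega])                      # diagonal of P_Omega^{-1/2}
B = w[:, None] * U_k[omega, :]                   # P_Omega^{-1/2} M U_k
alpha, *_ = np.linalg.lstsq(B, w * y, rcond=None)
err = np.linalg.norm(U_k @ alpha - x)

s = np.linalg.svd(B / np.sqrt(m), compute_uv=False)
delta = max(1.0 - s.min()**2, s.max()**2 - 1.0)  # empirical RIP constant
assert delta < 1.0                               # embedding holds for this draw
bound = 2.0 / np.sqrt(m * (1.0 - delta)) * np.linalg.norm(w * noise)
print(f"error {err:.4f} <= bound {bound:.4f}")
```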
