  1. Blind Identification of Invertible Graph Filters with Multiple Sparse Inputs
     Chang Ye, Dept. of ECE and Goergen Institute for Data Science, University of Rochester
     cye7@ur.rochester.edu, http://www.ece.rochester.edu/~cye7/
     Co-authors: Rasoul Shafipour and Gonzalo Mateos
     Acknowledgment: NSF Awards CCF-1750428 and ECCS-1809356
     EUSIPCO 2018, Rome, Italy, September 3, 2018

  2. Network Science analytics
     Online social media | Internet | Clean energy and grid analytics
     ◮ Network as undirected graph $G = (\mathcal{V}, \mathcal{E})$: encode pairwise relationships
     ◮ Desiderata: process, analyze and learn from network data [Kolaczyk'09]
       ⇒ Study graph signals, data associated with the $N$ nodes in $\mathcal{V}$
     ◮ Ex: opinion profile, buffer congestion levels, neural activity, epidemic

  3. Graph signal processing fundamentals
     ◮ Graph signals: mappings $x: \mathcal{V} \to \mathbb{R}$, represented as vectors $\mathbf{x} \in \mathbb{R}^N$
       ⇒ As.: signal properties related to the topology of $G$
     ◮ To process graph signals ⇒ graph-shift operator $\mathbf{S} \in \mathbb{R}^{N \times N}$
       ⇒ Local: $S_{ij} = 0$ for $i \neq j$ and $(i,j) \notin \mathcal{E}$
       ⇒ Ex: $\mathbf{A}$ or $\mathbf{L} = \mathbf{D} - \mathbf{A}$
       ⇒ Spectrum of symmetric $\mathbf{S} = \mathbf{V}\boldsymbol{\Lambda}\mathbf{V}^T$
     ◮ Graph Fourier Transform (GFT) for signals: $\tilde{\mathbf{x}} = \mathbf{V}^T\mathbf{x}$
     ◮ Graph filters $\mathbf{H}: \mathbb{R}^N \to \mathbb{R}^N$ are maps between graph signals
       ⇒ Polynomial in $\mathbf{S}$ with coefficients $\mathbf{h} \in \mathbb{R}^L$ ⇒ $\mathbf{H} := \sum_{l=0}^{L-1} h_l \mathbf{S}^l$
       ⇒ Orthogonal frequency operator: $\mathbf{H} = \mathbf{V}\,\mathrm{diag}(\tilde{\mathbf{h}})\,\mathbf{V}^T$
       ⇒ Freq. response (GFT for filters): $\tilde{\mathbf{h}} = \boldsymbol{\Psi}\mathbf{h}$, with $[\boldsymbol{\Psi}]_{k,l} = \lambda_k^{l-1}$
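To make these definitions concrete, here is a minimal numpy sketch (the random graph, sizes and variable names are illustrative assumptions, not from the slides) that builds a symmetric shift operator, computes the GFT, and checks that the polynomial filter $\mathbf{H} = \sum_l h_l \mathbf{S}^l$ matches its frequency-domain form $\mathbf{V}\mathrm{diag}(\boldsymbol{\Psi}\mathbf{h})\mathbf{V}^T$.

```python
import numpy as np

# Minimal sketch (illustrative, not from the slides): build a symmetric
# graph-shift operator S, take its eigendecomposition S = V diag(lam) V^T,
# and verify that the polynomial filter H = sum_l h_l S^l coincides with its
# frequency-domain form V diag(Psi h) V^T.
rng = np.random.default_rng(0)
N, L = 6, 3

A = np.triu(rng.random((N, N)) < 0.5, k=1).astype(float)  # random undirected graph
S = A + A.T                                               # adjacency as shift operator
lam, V = np.linalg.eigh(S)                                # spectrum of symmetric S

x = rng.standard_normal(N)
x_tilde = V.T @ x                                         # GFT of the signal x

h = rng.standard_normal(L)                                # filter coefficients
H = sum(h[l] * np.linalg.matrix_power(S, l) for l in range(L))

Psi = np.vander(lam, L, increasing=True)                  # [Psi]_{k,l} = lam_k^{l-1}
h_tilde = Psi @ h                                         # frequency response

assert np.allclose(H, V @ np.diag(h_tilde) @ V.T)         # the two forms agree
```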

  4. Diffusion processes as graph filter outputs
     ◮ Q: Upon observing a graph signal $\mathbf{y}$, how was this signal generated?
     ◮ Postulate $\mathbf{y}$ is the response of linear diffusion to a sparse input $\mathbf{x}$
       $$\mathbf{y} = \alpha_0 \prod_{l=1}^{\infty} (\mathbf{I} - \alpha_l \mathbf{S})\,\mathbf{x} = \sum_{l=0}^{\infty} \beta_l \mathbf{S}^l \mathbf{x}$$
       ⇒ Common generative model, e.g., heat diffusion, consensus
     ◮ Cayley-Hamilton asserts we can write the diffusion as ($L \leq N$)
       $$\mathbf{y} = \left( \sum_{l=0}^{L-1} h_l \mathbf{S}^l \right) \mathbf{x} := \mathbf{H}\mathbf{x}$$
     ◮ Model: observed network process as the output of a graph filter
       ⇒ View the few elements in $\mathrm{supp}(\mathbf{x}) := \{ i : x_i \neq 0 \}$ as sources
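As a toy illustration of this model (an assumed example, not from the slides), the single-pole diffusion $\sum_{l \geq 0} \beta^l \mathbf{S}^l \mathbf{x} = (\mathbf{I} - \beta\mathbf{S})^{-1}\mathbf{x}$ of a sparse input is one instance of the series above, and it is exactly a graph filter with frequency response $1/(1 - \beta\lambda_k)$:

```python
import numpy as np

# Toy example under assumed parameters: the single-pole diffusion
# y = sum_{l>=0} beta^l S^l x = (I - beta*S)^{-1} x of a sparse input x is a
# graph filter with frequency response 1/(1 - beta*lam_k), i.e. one instance
# of the y = sum_l beta_l S^l x model above.
rng = np.random.default_rng(1)
N, beta, n_sources = 8, 0.1, 2

A = np.triu(rng.random((N, N)) < 0.4, k=1).astype(float)
S = A + A.T
lam, V = np.linalg.eigh(S)
assert beta * np.max(np.abs(lam)) < 1        # series converges (beta small enough)

x = np.zeros(N)                              # sparse "source" signal
x[rng.choice(N, n_sources, replace=False)] = rng.standard_normal(n_sources)

y_series = np.linalg.solve(np.eye(N) - beta * S, x)         # limit of the diffusion
y_filter = V @ np.diag(1.0 / (1.0 - beta * lam)) @ V.T @ x  # same thing, as a graph filter
assert np.allclose(y_series, y_filter)
```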

  5. Motivation and problem statement
     ◮ Ex: Global opinion/belief profile formed by spreading a rumor
       ⇒ What was the rumor? Who started it?
       ⇒ How do people weigh peers' opinions to form their own?
     [Diagram: unobserved sparse input x → graph filter → observed output y]
     ◮ Problem: Blind identification of graph filters with sparse inputs
     ◮ Q: Given $\mathbf{S}$, can we find a sparse $\mathbf{x}$ and the filter coeffs. $\mathbf{h}$ from $\mathbf{y} = \mathbf{H}\mathbf{x}$?
       ⇒ Extends classical blind deconvolution to graphs
       ⇒ Localization of sources that diffuse on the network

  6. Work in context
     ◮ Super-resolution of point sources via convex programming
       ◮ Signals on structured domains (e.g., time series) [Fernandez-Granda'15]
       ◮ Known diffusion model (low-pass point-spread function)
     ◮ Source localization on graphs
       ◮ Maximum-likelihood estimator optimal for trees [Pinto et al'12]
       ◮ Scalable under restrictive dependency assumptions [Feizi et al'16]
       ◮ Non-convex estimators of sparse sources [Pena et al'16], [Hu et al'16]
     ◮ Blind identification of graph filters [Segarra et al'17]
       ◮ Matrix lifting can hinder applicability to large graphs
     ◮ Our contribution: mild requirement of graph filter invertibility
       ⇒ Convex formulation amenable to efficient solvers
       ⇒ Multi-signal case with arbitrary supports

  7. Blind graph filter identification
     ◮ Suppose we observe $P$ output signals $\mathbf{Y} = [\mathbf{y}_1, \ldots, \mathbf{y}_P] \in \mathbb{R}^{N \times P}$
     ◮ Leverage the frequency response of graph filters $\mathbf{Y} = \mathbf{H}\mathbf{X}$
       ⇒ $\mathbf{Y} = \mathbf{V}\mathrm{diag}(\boldsymbol{\Psi}\mathbf{h})\mathbf{V}^T\mathbf{X}$
       ⇒ $\mathbf{Y}$ is a bilinear function of the unknowns $\mathbf{h}$ and $\mathbf{X}$
     ◮ Ill-posed problem ⇒ $L + NP$ unknowns and $NP$ observations
       ⇒ As.: $\mathbf{X}$ has $S$-sparse columns, i.e., $\|\mathbf{X}\|_0 := |\mathrm{supp}(\mathbf{X})| \leq PS$
     ◮ Blind graph filter identification ⇒ non-convex feasibility problem
       $$\text{find } \{\mathbf{h}, \mathbf{X}\}, \ \text{s. to } \ \mathbf{Y} = \mathbf{V}\mathrm{diag}(\boldsymbol{\Psi}\mathbf{h})\mathbf{V}^T\mathbf{X}, \ \|\mathbf{X}\|_0 \leq PS$$
       ⇒ Identifiability for a Bernoulli-Gaussian model on $\mathbf{X}$ [Li et al'17]
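A quick numerical sanity check of this bilinear structure (illustrative only; the instance is assumed): the forward map is linear in $\mathbf{h}$ for fixed $\mathbf{X}$ and linear in $\mathbf{X}$ for fixed $\mathbf{h}$, which is what makes the joint feasibility problem non-convex.

```python
import numpy as np

# Quick numerical check (illustrative) of the bilinear structure: the forward
# map F(h, X) = V diag(Psi h) V^T X is linear in h for fixed X and linear in
# X for fixed h, but not jointly linear in (h, X).
rng = np.random.default_rng(3)
N, L, P = 10, 3, 4

A = np.triu(rng.random((N, N)) < 0.4, k=1).astype(float)
lam, V = np.linalg.eigh(A + A.T)
Psi = np.vander(lam, L, increasing=True)
F = lambda h, X: V @ np.diag(Psi @ h) @ V.T @ X

h1, h2 = rng.standard_normal(L), rng.standard_normal(L)
X1, X2 = rng.standard_normal((N, P)), rng.standard_normal((N, P))

assert np.allclose(F(h1 + h2, X1), F(h1, X1) + F(h2, X1))   # linear in h
assert np.allclose(F(h1, X1 + X2), F(h1, X1) + F(h1, X2))   # linear in X
```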

  8. Permutation ambiguities
     ◮ Beyond scaling, permutation ambiguities can arise with unweighted $G$
       [Figure: example unweighted graph with labeled nodes]
     ◮ Let $\{\mathbf{X}_0, \tilde{\mathbf{h}}_0\}$ be a solution, i.e., $\mathbf{Y} = \mathbf{V}\mathrm{diag}(\tilde{\mathbf{h}}_0)\mathbf{V}^T\mathbf{X}_0$
       ⇒ Define the unit-norm $\mathbf{u}^{(i,j)} \in \mathbb{R}^N$, with $u^{(i,j)}_i = -u^{(i,j)}_j = \tfrac{1}{\sqrt{2}}$
     ◮ If $\mathbf{v}_k = \mathbf{u}^{(i,j)}$, then $\exists\,\{\mathbf{X}_1, \tilde{\mathbf{h}}_1\}$ such that $\mathbf{Y} = \mathbf{V}\mathrm{diag}(\tilde{\mathbf{h}}_1)\mathbf{V}^T\mathbf{X}_1$
       $$\mathbf{X}_1 := \mathbf{P}\mathbf{X}_0, \quad \tilde{\mathbf{h}}_1 := \mathrm{diag}(\mathbf{p})\tilde{\mathbf{h}}_0, \quad \mathbf{P} := \mathbf{I} - 2\mathbf{u}^{(i,j)}(\mathbf{u}^{(i,j)})^T = \mathbf{V}\mathrm{diag}(\mathbf{p})\mathbf{V}^T$$
       ⇒ Compare with the cyclic-shift ambiguity for discrete-time signals
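This ambiguity is easy to reproduce numerically. The sketch below uses an assumed 5-node unweighted graph (not the one pictured on the slide) in which two nodes have identical neighborhoods, so $\mathbf{u}^{(i,j)}$ is an eigenvector of $\mathbf{A}$, and verifies that $\{\mathbf{P}\mathbf{X}_0, \mathrm{diag}(\mathbf{p})\tilde{\mathbf{h}}_0\}$ reproduces the same observations.

```python
import numpy as np

# Assumed 5-node example (not from the slides): nodes 3 and 4 have identical
# neighborhoods, so u = (e_3 - e_4)/sqrt(2) is an eigenvector of A and the
# reflection P = I - 2*u*u^T (which swaps nodes 3 and 4) maps one solution
# {X0, h~0} to another {P X0, diag(p) h~0} with the same observations Y.
N, n_sig = 5, 4
A = np.zeros((N, N))
for i, j in [(0, 1), (1, 2), (0, 2), (0, 3), (0, 4)]:
    A[i, j] = A[j, i] = 1.0

lam, V = np.linalg.eigh(A)
k = np.argmin(np.abs(lam))                     # eigenvalue 0 <-> eigenvector u^{(3,4)}
u = np.zeros(N)
u[3], u[4] = 1 / np.sqrt(2), -1 / np.sqrt(2)
assert np.allclose(np.abs(V[:, k]), np.abs(u))

Perm = np.eye(N) - 2 * np.outer(u, u)          # Householder reflection = node swap

rng = np.random.default_rng(0)
h0_t = rng.standard_normal(N)                  # a frequency response h~0
X0 = rng.standard_normal((N, n_sig))           # some inputs
Y = V @ np.diag(h0_t) @ V.T @ X0

p = np.diag(V.T @ Perm @ V).round()            # p_k = -1, all other entries +1
X1 = Perm @ X0                                 # alternative inputs
h1_t = p * h0_t                                # alternative frequency response
assert np.allclose(Y, V @ np.diag(h1_t) @ V.T @ X1)
```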

  9. Inverse filter and convex relaxation
     ◮ The inverse filter $\mathbf{G} = \mathbf{H}^{-1}$ is also a graph filter on $G$ [Sandryhaila-Moura'13]
       ⇒ Requires $\tilde{h}_i = \sum_{l=0}^{L-1} h_l \lambda_i^l \neq 0$, for all $i = 1, \ldots, N$
       ⇒ Inverse-filter coefficients $\mathbf{g} \in \mathbb{R}^N$, frequency response $\tilde{\mathbf{g}} = \boldsymbol{\Psi}\mathbf{g}$
     ◮ Recast as a linear inverse problem [Wang-Chi'16]
       $$\min_{\{\tilde{\mathbf{g}}, \mathbf{X}\}} \|\mathbf{X}\|_0, \ \text{s. to } \ \mathbf{X} = \mathbf{V}\mathrm{diag}(\tilde{\mathbf{g}})\mathbf{V}^T\mathbf{Y}, \ \mathbf{X} \neq \mathbf{0}$$
     ◮ Still NP-hard. Relax! Minimize the convex surrogate $\|\mathbf{X}\|_1$
       $$\hat{\tilde{\mathbf{g}}} = \operatorname*{argmin}_{\tilde{\mathbf{g}}} \|(\mathbf{Y}^T\mathbf{V} \odot \mathbf{V})\tilde{\mathbf{g}}\|_1, \ \text{s. to } \ \mathbf{1}^T\tilde{\mathbf{g}} = 1$$
       ⇒ The constraint fixes the scale and avoids the all-zero solution
       ⇒ ℓ1-synthesis problem, efficient solvers available
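A minimal sketch of this ℓ1 relaxation on a small synthetic instance, using cvxpy as a generic convex solver; the instance sizes, variable names and solver choice are assumptions rather than the authors' implementation, and ⊙ is read as the column-wise Kronecker (Khatri-Rao) product, consistent with $\mathbf{Z} \in \mathbb{R}^{NP \times N}$ on the next slide.

```python
import numpy as np
import cvxpy as cp

# Minimal sketch of the l1 relaxation on a small synthetic instance; cvxpy is
# used as a generic convex solver (instance sizes, names and solver are
# assumptions, not the authors' code). We solve
#     min_g ||(Y^T V (.) V) g||_1   s.t.   1^T g = 1,
# with (.) the column-wise Kronecker product, then form X = V diag(g) V^T Y.
rng = np.random.default_rng(0)
N, L, P, S_sparsity = 20, 3, 8, 2

A = np.triu(rng.random((N, N)) < 0.3, k=1).astype(float)
lam, V = np.linalg.eigh(A + A.T)

X0 = np.zeros((N, P))                          # S-sparse ground-truth inputs
for j in range(P):
    X0[rng.choice(N, S_sparsity, replace=False), j] = rng.standard_normal(S_sparsity)
Psi = np.vander(lam, L, increasing=True)
h_tilde = Psi @ rng.standard_normal(L)         # generically invertible filter
Y = V @ np.diag(h_tilde) @ V.T @ X0            # observations

YtV = Y.T @ V                                  # Z g stacks the entries of X column by column
Z = np.column_stack([np.kron(YtV[:, k], V[:, k]) for k in range(N)])

g = cp.Variable(N)                             # inverse-filter frequency response g~
prob = cp.Problem(cp.Minimize(cp.norm(Z @ g, 1)),
                  [cp.sum(g) == 1])            # fixes the scale, rules out g = 0
prob.solve()

X_hat = V @ np.diag(g.value) @ V.T @ Y         # estimate of the sparse inputs (up to scale)
```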

  10. Recovery guarantee for ℓ1 relaxation
     ◮ Let $\{\mathbf{X}_0, \tilde{\mathbf{g}}_0\}$ be the solution, i.e., $\mathbf{X}_0 = \mathbf{V}\mathrm{diag}(\tilde{\mathbf{g}}_0)\mathbf{V}^T\mathbf{Y}$
       ⇒ $\mathcal{I}$ indexes the support of $\mathrm{vec}(\mathbf{X}_0)$; its complement is $\mathcal{I}^c$
     ◮ Define $\mathbf{Z} := \mathbf{Y}^T\mathbf{V} \odot \mathbf{V} \in \mathbb{R}^{NP \times N}$
       ⇒ $\mathbf{Z}_{\mathcal{S}}$ is the submatrix of $\mathbf{Z}$ with rows indexed by $\mathcal{S} \subset \{1, \ldots, NP\}$
     Proposition: $\hat{\tilde{\mathbf{g}}} = \tilde{\mathbf{g}}_0$ if the following two conditions are satisfied:
       1) $\mathrm{rank}(\mathbf{Z}_{\mathcal{I}^c}) = N - 1$; and
       2) there exists $\mathbf{f} \in \mathbb{R}^{NP}$ such that $\mathbf{Z}^T\mathbf{f} = \gamma\mathbf{1}$ for some $\gamma \neq 0$, with $\mathbf{f}_{\mathcal{I}} = \mathrm{sign}(\mathbf{Z}_{\mathcal{I}}\tilde{\mathbf{g}}_0)$ and $\|\mathbf{f}_{\mathcal{I}^c}\|_\infty < 1$
     ◮ Cond. 1) ensures uniqueness of the solution $\hat{\tilde{\mathbf{g}}}$
     ◮ Cond. 2) guarantees existence of a dual certificate $\mathbf{f}$ for ℓ0 optimality
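Condition 1) can be tested numerically for a given instance. The helper below is a hypothetical check that assumes $\mathbf{Z}$ and the ground-truth $\mathbf{X}_0$ are formed as in the previous sketch, with $\mathrm{vec}(\mathbf{X}_0)$ stacking the columns of $\mathbf{X}_0$.

```python
import numpy as np

# Hypothetical helper to test condition 1) numerically; it assumes Z and the
# ground-truth X0 are formed as in the previous sketch, with vec(X0) stacking
# the columns of X0.
def condition_one_holds(Z: np.ndarray, X0: np.ndarray, tol: float = 1e-9) -> bool:
    """Check rank(Z_{I^c}) == N - 1, with I the support of vec(X0)."""
    N = Z.shape[1]
    support = np.flatnonzero(np.abs(X0.flatten(order="F")) > tol)   # I
    complement = np.setdiff1d(np.arange(Z.shape[0]), support)       # I^c
    return np.linalg.matrix_rank(Z[complement, :]) == N - 1
```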

  11. Simulation setup
     ◮ Consider undirected graphs with $\mathbf{S} = \mathbf{D}^{-1/2}\mathbf{A}\mathbf{D}^{-1/2}$
       ⇒ Erdős-Rényi (ER) graphs with $N = 50$ and edge probability $p = 0.3$
       ⇒ Structural brain network with $N = 66$ [Hagmann et al'08]
     ◮ $\mathbf{X}_0$ adheres to a Bernoulli-Gaussian model; vary $P$ and $S$
     ◮ Filter $\mathbf{h}_0 = (\mathbf{e}_1 + \alpha\mathbf{b})/\|\mathbf{e}_1 + \alpha\mathbf{b}\|_1$ as in [Wang-Chi'16]
       ⇒ $\mathbf{e}_1 = [1, 0, \ldots, 0]^T \in \mathbb{R}^L$ and $\mathbf{b} \sim \mathcal{N}(\mathbf{0}, \mathbf{I})$
       ⇒ Recovery performance improves as $\alpha \geq 0$ decreases
     ◮ Observation matrix $\mathbf{Y} = \mathbf{V}\mathrm{diag}(\boldsymbol{\Psi}\mathbf{h}_0)\mathbf{V}^T\mathbf{X}_0$
     ◮ Figure of merit: relative recovery error $e_X = \|\hat{\mathbf{X}} - \mathbf{X}_0\| / \|\mathbf{X}_0\|$
       ⇒ Successful recovery declared when $e_X < 0.01$; show rates over 20 realizations
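A plausible numpy reconstruction of this setup (parameter values such as $N = 50$, $p = 0.3$ and the form of $\mathbf{h}_0$ come from the slide; the specific $P$, $S$, seed and variable names are assumptions):

```python
import numpy as np

# Plausible reconstruction of the ER setup (parameter values from the slide;
# names, seed and the rest are assumed). Assumes the realization has no
# isolated nodes so that D^{-1/2} is well defined.
rng = np.random.default_rng(0)
N, L, P, S_sparsity, p_edge, alpha = 50, 3, 10, 3, 0.3, 0.5

A = np.triu(rng.random((N, N)) < p_edge, k=1).astype(float)
A = A + A.T
d = A.sum(axis=1)
S = A / np.sqrt(np.outer(d, d))                # S = D^{-1/2} A D^{-1/2}
lam, V = np.linalg.eigh(S)

X0 = np.zeros((N, P))                          # Bernoulli-Gaussian sparse inputs
for j in range(P):
    X0[rng.choice(N, S_sparsity, replace=False), j] = rng.standard_normal(S_sparsity)

b = rng.standard_normal(L)                     # h0 = (e1 + alpha*b)/||e1 + alpha*b||_1
h0 = np.eye(L)[:, 0] + alpha * b
h0 /= np.linalg.norm(h0, 1)

Psi = np.vander(lam, L, increasing=True)
Y = V @ np.diag(Psi @ h0) @ V.T @ X0           # observation matrix

rel_err = lambda X_hat: np.linalg.norm(X_hat - X0) / np.linalg.norm(X0)
# declare success when rel_err(X_hat) < 0.01, averaging rates over realizations
```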

  12. Recovery performance
     ◮ Successful recovery over most of the $(S, P)$ plane
       ⇒ Using multiple signals aids recovery
       ⇒ Performance improves with smaller $\alpha$
     ◮ Brain graph ($\alpha = 0.5$): proposed method (left plot) and [Segarra et al'17] (right plot)
       ⇒ Performance of the matrix-lifting approach degrades faster with $L$

  13. Concluding summary
     ◮ Blind identification of graph filters with multiple sparse inputs
       ⇒ Extends blind deconvolution of space/time signals to graphs
       ⇒ Key: model the diffusion process as the output of a graph filter
     ◮ Invertible graph filter assumption
       ⇒ From a bilinear to a linear inverse problem
       ⇒ Devoid of matrix lifting → scales better to large graphs
       ⇒ Encouraging performance for random and real-world graphs
     ◮ Ongoing work
       ⇒ Exact recovery under the Bernoulli-Gaussian model
       ⇒ Stable recovery from noisy and sampled observations
     ◮ Envisioned application domains
       (a) Localize sources of epileptic seizures
       (b) Event-driven information cascades and "fake-news" detection
       (c) Trace "patient zero" for an epidemic outbreak
