1. The Convex Geometry of Blind Deconvolution
Dominik Stöger, Technische Universität München, Department of Mathematics
July 12, 2019
Joint work with Felix Krahmer (TUM). Funded by the DFG in the context of SPP 1798 CoSIP.

2. Blind deconvolution in imaging
• Blind deconvolution is ubiquitous in many applications:
  − Imaging: x signal, w blur
• (Circular) convolution of w, x ∈ C^L:
  (w ∗ x)_ℓ := ∑_{k=1}^{L} w_k x_{(ℓ−k) mod L}.
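
The definition above can be checked numerically. Below is a minimal sketch (not from the talk) that evaluates the circular convolution directly and, via the convolution theorem, as a pointwise product in the Fourier domain; the dimensions and variable names are illustrative.

```python
import numpy as np

def circular_convolution(w, x):
    """Direct evaluation of (w * x)_l = sum_k w_k x_{(l-k) mod L} (0-indexed here)."""
    L = len(w)
    return np.array([sum(w[k] * x[(l - k) % L] for k in range(L)) for l in range(L)])

rng = np.random.default_rng(0)
L = 8
w = rng.standard_normal(L) + 1j * rng.standard_normal(L)
x = rng.standard_normal(L) + 1j * rng.standard_normal(L)

# Convolution theorem: circular convolution is pointwise multiplication after the DFT.
via_fft = np.fft.ifft(np.fft.fft(w) * np.fft.fft(x))
assert np.allclose(circular_convolution(w, x), via_fft)
```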

3. Blind deconvolution in wireless communications
• Task: deliver a message m ∈ C^N via an unknown channel. Proposed approach: introduce redundancy before transmission.
• Linear encoding: x = Cm with C ∈ C^{L×N}; the signal x is transmitted. (Introduced by Ahmed, Recht, Romberg, IEEE IT ’14.)
• Channel model: only the most direct paths are active, w = Bh, where B ∈ C^{L×K}.
• Received signal: y = w ∗ x + e ∈ C^L, with e noise.
• Goal: recover m from y.
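
A minimal simulation of this measurement model (not from the talk; the dimensions, the choice of B as the first K standard basis vectors, and the noise level are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(1)
L, K, N = 64, 8, 8                       # illustrative dimensions

C = rng.standard_normal((L, N))          # encoding matrix with i.i.d. standard Gaussian entries
B = np.eye(L)[:, :K]                     # B^*B = Id: w = Bh zero-pads the K channel taps

m = rng.standard_normal(N) + 1j * rng.standard_normal(N)    # message
h = rng.standard_normal(K) + 1j * rng.standard_normal(K)    # channel coefficients

x = C @ m                                # transmitted signal x = Cm
w = B @ h                                # channel impulse response w = Bh
e = 1e-3 * (rng.standard_normal(L) + 1j * rng.standard_normal(L))
y = np.fft.ifft(np.fft.fft(w) * np.fft.fft(x)) + e          # received signal y = w * x + e
```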

4–6. Lifting
• Observation: w ∗ x = (Bh) ∗ (Cm) is bilinear in h and m
  ⇒ there is a unique linear map A : C^{K×N} → C^L such that (Bh) ∗ (Cm) = A(hm∗) for arbitrary h and m.
• Thus the rank-one matrix X₀ = hm∗ satisfies y = A(X₀) + e.
• Finding X₀ is a low-rank matrix recovery problem.
• Ideally: find argmin rank(X) subject to ‖A(X) − y‖₂ ≤ η.
• Such problems are NP-hard in general → try a convex relaxation.
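
The lifting identity can also be verified numerically. The sketch below (not from the talk) builds a linear map A from B and C via the convolution theorem and checks that it reproduces (Bh) ∗ (Cm) on the rank-one lift; for simplicity it uses the unconjugated outer product h mᵀ, which differs from hm∗ only by a conjugation of m.

```python
import numpy as np

rng = np.random.default_rng(2)
L, K, N = 32, 4, 5

B = np.eye(L)[:, :K]
C = rng.standard_normal((L, N)) + 1j * rng.standard_normal((L, N))
FB, FC = np.fft.fft(B, axis=0), np.fft.fft(C, axis=0)      # F B and F C

def A(X):
    """Linear map on C^{K x N}: in the Fourier domain, A(X)^_l = (FB @ X @ FC^T)_{ll}."""
    return np.fft.ifft(np.einsum('lk,kn,ln->l', FB, X, FC))

h = rng.standard_normal(K) + 1j * rng.standard_normal(K)
m = rng.standard_normal(N) + 1j * rng.standard_normal(N)

bilinear = np.fft.ifft(np.fft.fft(B @ h) * np.fft.fft(C @ m))   # (Bh) * (Cm)
lifted = A(np.outer(h, m))                                      # A applied to the rank-one matrix
assert np.allclose(bilinear, lifted)
```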

7–8. A convex approach
SDP relaxation (Ahmed, Recht, Romberg ’14): solve the semidefinite program
  X̂ = argmin ‖X‖_∗ subject to ‖A(X) − y‖₂ ≤ η.   (SDP)
The nuclear norm ‖X‖_∗ := ∑_{j=1}^{rank(X)} σ_j(X) is the sum of all singular values.
Model assumptions:
• y = (Bh) ∗ (Cm) + e
• Adversarial noise: ‖e‖₂ ≤ η
• C ∈ C^{L×N} has i.i.d. standard Gaussian entries
• B ∈ C^{L×K} satisfies B∗B = Id and is such that FB (for F the DFT) has rows of equal norm.
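
A small illustration of (SDP) using cvxpy with the SCS solver; this is a sketch under illustrative assumptions (real h, m, C, tiny dimensions, data-fit constraint expressed in the Fourier domain), not the authors' code.

```python
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(3)
L, K, N = 32, 3, 3                               # small illustrative dimensions

B = np.eye(L)[:, :K]
C = rng.standard_normal((L, N))
FB, FC = np.fft.fft(B, axis=0), np.fft.fft(C, axis=0)

h, m = rng.standard_normal(K), rng.standard_normal(N)
X0 = np.outer(h, m)                              # rank-one ground truth

# Measurements written in the Fourier domain: y_hat_l = (FB X0 FC^T)_{ll} + noise.
e_hat = rng.standard_normal(L) + 1j * rng.standard_normal(L)
e_hat *= 1e-3 / np.linalg.norm(e_hat)
y_hat = np.diag(FB @ X0 @ FC.T) + e_hat
eta = np.linalg.norm(e_hat)

# Nuclear-norm minimization subject to the data-fit constraint, as in (SDP).
X = cp.Variable((K, N))
residual = cp.diag(FB @ X @ FC.T) - y_hat
problem = cp.Problem(cp.Minimize(cp.normNuc(X)), [cp.norm(residual, 2) <= eta])
problem.solve(solver=cp.SCS)

print("relative error:", np.linalg.norm(X.value - X0) / np.linalg.norm(X0))
```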

9–11. Recovery guarantees
Theorem (Ahmed, Recht, Romberg ’14). Assume
  L / log³ L ≥ C µ_h² (K + N).
Then with high probability every minimizer X̂ of (SDP) satisfies
  ‖X̂ − hm∗‖_F ≲ √(K + N) · η.
• µ_h is a coherence parameter (typically small).
• Consequences:
  − No noise, i.e., η = 0: exact recovery with a near-optimal number of measurements.
  − Noisy scenario, i.e., η > 0: the dimension factor √(K + N) appears in the noise term.
    This does not explain the empirical success of (SDP).

12–13. Noise robustness in low-rank matrix recovery
• Gaussian measurement matrices (implies RIP)  ✓
• Phase retrieval  ✓
• Blind deconvolution (this presentation)  ?
• Matrix completion  ?
• Robust PCA  ?
• Many more...  ?
Despite the popularity of convex relaxations for low-rank matrix recovery in the literature, their noise robustness is not well understood.

14–15. What is the problem?
• Proof technique for these models:
  − Idea: show the existence of an (approximate) dual certificate w.h.p.
  − Golfing scheme, originally developed by D. Gross.
• This works well in the noiseless case, where X₀ is expected to be the minimizer.
• Problem: in noisy models we do not know the minimizer.

16–17. Are the dimension factors necessary?
Recall: we are interested in the scenario L ≪ KN, and we optimize
  X̂ = argmin ‖X‖_∗ subject to ‖A(X) − y‖₂ ≤ η.   (SDP)
Theorem (Krahmer, DS ’19). There exists an admissible B such that, with high probability, there is τ₀ > 0 such that for all τ ≤ τ₀ there exists an adversarial noise vector e ∈ C^L with ‖e‖₂ ≤ τ that admits an alternative solution X̃ with the following properties:
• X̃ is feasible, i.e., ‖A(X̃) − y‖₂ = τ,
• X̃ is preferred to hm∗ by (SDP), i.e., ‖X̃‖_∗ ≤ ‖hm∗‖_∗, but
• X̃ is far from the true solution in Frobenius norm, i.e., ‖X̃ − hm∗‖_F ≥ (τ/C₃) √(KN/L).

18. What does this mean?
• Assume K = N and L ≈ CK up to log-factors. Then
  ‖X̃ − hm∗‖_F ≳ τ √(KN/L) ≈ τ √(K + N), up to log-factors.
  → The factor √(K + N) is not a pure proof artifact.
• Caution: X̃ might not be the minimizer of (SDP)!
• An analogous result can be shown for matrix completion.
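
A quick numeric check of this back-of-the-envelope computation (the values of K, N, and the constant are purely illustrative):

```python
import numpy as np

K = N = 100
L = 5 * K                      # L ~ C*K, ignoring log-factors
print(np.sqrt(K * N / L))      # sqrt(KN/L) = sqrt(K)/sqrt(5) ~ 4.5
print(np.sqrt(K + N))          # sqrt(K + N) = sqrt(2K) ~ 14.1: the same order of magnitude
```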

19. Ideas of the analysis I
[Figure: the nuclear norm ball, the affine set X₀ + ker A, and the descent cone at X₀.]
• Crucial geometric object: the descent cone at X₀ ∈ C^{K×N},
  K_∗(X₀) = { Z ∈ C^{K×N} : ‖X₀ + εZ‖_∗ ≤ ‖X₀‖_∗ for some small ε > 0 }.
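
A small numerical sanity check of this definition (not from the talk): the test below probes the defining inequality at a single small ε, which is only a heuristic proxy for "for some ε > 0". Shrinking X₀ is always a descent direction, whereas a random direction typically is not.

```python
import numpy as np

def looks_like_descent_direction(X0, Z, eps=1e-4):
    """Heuristic check of ||X0 + eps*Z||_* <= ||X0||_* at one small eps."""
    nuc = lambda M: np.linalg.norm(M, ord='nuc')
    return nuc(X0 + eps * Z) <= nuc(X0) + 1e-12

rng = np.random.default_rng(4)
K, N = 5, 6
X0 = np.outer(rng.standard_normal(K), rng.standard_normal(N))   # rank-one point

print(looks_like_descent_direction(X0, -X0))                          # True: scaling X0 down
print(looks_like_descent_direction(X0, rng.standard_normal((K, N))))  # typically False
```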

20. Ideas of the analysis II
• Minimum conic singular value:
  λ_min(A, K_∗(X₀)) := min_{Z ∈ K_∗(X₀)} ‖A(Z)‖₂ / ‖Z‖_F.
• Noiseless scenario, i.e., η = 0: exact recovery ⟺ λ_min(A, K_∗(X₀)) > 0.
• Noisy scenario: the conic singular value controls stability [Chandrasekaran et al. ’12]:
  ‖X̂ − X₀‖_F ≤ 2η / λ_min(A, K_∗(X₀)).
  (For Gaussian A, λ_min(A, K_∗(hm∗)) ≍ 1 w.h.p. whenever L ≳ K + N.)
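
For completeness, here is the short standard argument behind the stability bound (following Chandrasekaran et al., not spelled out on the slide): since X₀ is feasible and X̂ is optimal, ‖X̂‖_∗ ≤ ‖X₀‖_∗, and by convexity ‖X₀ + εZ‖_∗ ≤ ‖X₀‖_∗ for all ε ∈ (0, 1] with Z := X̂ − X₀, so Z ∈ K_∗(X₀). Moreover, ‖A(Z)‖₂ ≤ ‖A(X̂) − y‖₂ + ‖y − A(X₀)‖₂ ≤ 2η by the triangle inequality. Combining this with λ_min(A, K_∗(X₀)) ≤ ‖A(Z)‖₂ / ‖Z‖_F yields ‖X̂ − X₀‖_F ≤ 2η / λ_min(A, K_∗(X₀)).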

21. Ideas of the analysis III
Lemma (Krahmer, DS ’19). There exists B ∈ C^{L×K} satisfying B∗B = Id_K and µ²_max = 1 whose corresponding measurement operator A satisfies the following: let m ∈ C^N \ {0} and let h ∈ C^K \ {0} be incoherent. Then with high probability it holds that
  λ_min(A, K_∗(hm∗)) ≤ C₃ √(L / (KN)).
• The lemma can be used to prove the previous theorem.
• (An analogous result holds for matrix completion.)

22. All hope is lost???

23. Recovery for high noise levels
Theorem (Krahmer, DS ’19). Let α > 0. Assume that
  L ≥ C₁ (µ²/α²) (K + N) log² L.
Then with high probability the following statement holds for all h ∈ S^{K−1} with µ_h ≤ µ, all m ∈ S^{N−1}, all τ > 0, and all e ∈ C^L with ‖e‖₂ ≤ τ: any minimizer X̂ of (SDP) satisfies
  ‖X̂ − hm∗‖_F ≤ C₃ (µ^{2/3} log^{2/3} L / α^{2/3}) max{τ, α}.
→ Near-optimal recovery guarantees for high noise levels.
