Compressed Sensing and Generative Models
Ashish Bora, Ajil Jalal, Eric Price, Alex Dimakis (UT Austin)


Standard Compressed Sensing Formalism
"Compressible" = "sparse".
Want to estimate x from y = Ax + η, for A ∈ R^(m×n).
◮ For this talk: ignore η, so y = Ax.
Goal: find x̂ with
    ‖x − x̂‖₂ ≤ O(1) · min over k-sparse x′ of ‖x − x′‖₂    (1)
with high probability.
◮ Reconstruction accuracy proportional to model accuracy.
Theorem [Candès-Romberg-Tao 2006]
◮ m = Θ(k log(n/k)) suffices for (1).
◮ Such an x̂ can be found efficiently with, e.g., the LASSO.
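To make the sparsity baseline concrete, here is a minimal sketch (added for illustration, not from the slides) of recovering a k-sparse x from Gaussian measurements with the LASSO via scikit-learn; the dimensions and the regularization weight alpha are arbitrary demo choices.

    import numpy as np
    from sklearn.linear_model import Lasso

    rng = np.random.default_rng(0)
    n, k = 1000, 10                          # signal length and sparsity
    m = int(4 * k * np.log(n / k))           # measurements on the order of k log(n/k)

    x = np.zeros(n)
    x[rng.choice(n, k, replace=False)] = rng.standard_normal(k)   # k-sparse signal

    A = rng.standard_normal((m, n)) / np.sqrt(m)   # Gaussian measurement matrix
    y = A @ x                                      # noiseless measurements y = Ax

    # LASSO: minimize (1/(2m))‖y − Aw‖₂² + alpha·‖w‖₁ over w
    lasso = Lasso(alpha=1e-4, fit_intercept=False, max_iter=100000)
    lasso.fit(A, y)
    x_hat = lasso.coef_

    print("relative error:", np.linalg.norm(x - x_hat) / np.linalg.norm(x))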

Alternatives to sparsity?
MRI images are sparse in the wavelet basis.
Worldwide, 100 million MRIs are taken per year.
Want a data-driven model.
◮ Better structural understanding should give fewer measurements.
Best way to model images in 2019?
◮ Deep convolutional neural networks.
◮ In particular: generative models.

Generative Models / Training Generative Models
[Figure: a generator maps random noise z ∈ R^k to an image in R^n; face samples from Karras et al., 2018.]

Generative Models
Want to model a distribution D of images.
Function G : R^k → R^n.
When z ∼ N(0, I_k), then ideally G(z) ∼ D.
Generative Adversarial Networks (GANs) [Goodfellow et al. 2014]:
◮ Example domains: Faces (Karras et al., 2018), Astronomy (Schawinski et al., 2017), Particle Physics (Paganini et al., 2017).
Variational Auto-Encoders (VAEs) [Kingma & Welling 2013].
Suggestion for compressed sensing: replace "x is k-sparse" by "x is in the range of G : R^k → R^n".
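As a concrete picture of the definition above, here is a minimal sketch (added for illustration, not from the slides) of a small ReLU generator G : R^k → R^n in NumPy, sampled at z ∼ N(0, I_k). The layer widths and weights are placeholders; a real G would come from GAN or VAE training.

    import numpy as np

    rng = np.random.default_rng(0)
    k, n = 20, 784                       # latent and image dimensions (arbitrary)
    widths = [k, 128, 256, n]            # a small ReLU network with d = 3 layers

    # Random, untrained weights -- stand-ins for a trained generator's parameters.
    weights = [rng.standard_normal((w_out, w_in)) / np.sqrt(w_in)
               for w_in, w_out in zip(widths[:-1], widths[1:])]

    def G(z):
        """Apply z -> ReLU(A_i z) layer by layer; the last layer is left linear."""
        h = z
        for A_i in weights[:-1]:
            h = np.maximum(A_i @ h, 0.0)
        return weights[-1] @ h

    z = rng.standard_normal(k)           # z ~ N(0, I_k)
    image = G(z)                         # a point in range(G), an element of R^n
    print(image.shape)                   # (784,)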

Our Results
"Compressible" = "near range(G)".
Want to estimate x from y = Ax, for A ∈ R^(m×n).
Goal: find x̂ with
    ‖x − x̂‖₂ ≤ O(1) · min over x′ = G(z′), ‖z′‖₂ ≤ r of ‖x − x′‖₂ + δ    (2)
Main Theorem I: m = O(kd log n) suffices for (2).
◮ G is a d-layer ReLU-based neural network.
◮ When A is a random Gaussian matrix.
Main Theorem II:
◮ For any L-Lipschitz G, m = O(k log(rL/δ)) suffices.
◮ Morally the same O(kd log n) bound.

Our Results (II)
"Compressible" = "near range(G)".
Want to estimate x from y = Ax, for A ∈ R^(m×n).
Goal: find x̂ with
    ‖x − x̂‖₂ ≤ O(1) · min over x′ ∈ range(G) of ‖x − x′‖₂    (3)
m = O(kd log n) suffices for d-layer G.
◮ Compared to O(k log n) for sparsity-based methods.
◮ k here can be much smaller.
Find x̂ = G(ẑ) by gradient descent on ‖y − AG(z)‖₂ over z.
◮ Just like for training, there is no proof this converges.
◮ An approximate solution approximately gives (3).
◮ Can check that ‖x̂ − x‖₂ is small.
◮ In practice, the optimization error is negligible.
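The recovery step above is easy to write down. Here is a minimal sketch in PyTorch (added for illustration, not the authors' implementation); the toy generator, learning rate, and iteration count are placeholder choices. The latent vector z is the optimization variable and gradient descent runs on ‖y − AG(z)‖₂².

    import torch

    torch.manual_seed(0)
    k, n, m = 20, 784, 200

    # Placeholder generator G : R^k -> R^n with random weights;
    # a real G would be a trained GAN or VAE decoder.
    G = torch.nn.Sequential(
        torch.nn.Linear(k, 256), torch.nn.ReLU(),
        torch.nn.Linear(256, n),
    )
    for p in G.parameters():
        p.requires_grad_(False)              # the generator stays fixed during recovery

    A = torch.randn(m, n) / m ** 0.5         # Gaussian measurement matrix
    x_true = G(torch.randn(k))               # ground truth taken from range(G) for the demo
    y = A @ x_true                           # observed measurements y = Ax

    z = torch.zeros(k, requires_grad=True)   # latent variable to optimize
    opt = torch.optim.Adam([z], lr=0.05)

    for step in range(2000):
        opt.zero_grad()
        loss = torch.sum((y - A @ G(z)) ** 2)    # ‖y − A G(z)‖₂²
        loss.backward()
        opt.step()

    x_hat = G(z).detach()                    # the estimate x̂ = G(ẑ)
    print("relative error:", (torch.norm(x_hat - x_true) / torch.norm(x_true)).item())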

Related Work
Model-based compressed sensing (Baraniuk-Cevher-Duarte-Hegde '10)
◮ k-sparse + more structure ⇒ O(k) measurements.
Projections on manifolds (Baraniuk-Wakin '09, Eftekhari-Wakin '15)
◮ Conditions on the manifold under which recovery is possible.
Deep network models (Mousavi-Dasarathy-Baraniuk '17, Chang et al. '17)
◮ Train a deep network to encode and/or decode.

Experimental Results
Faces: n = 64 × 64 × 3 = 12288, m = 500.
[Figure: original face images and reconstructions by Lasso (DCT), Lasso (Wavelet), and DCGAN.]
MNIST: n = 28 × 28 = 784, m = 100.
[Figure: original digit images and reconstructions by Lasso and VAE.]
[Plots: per-pixel reconstruction error vs. number of measurements, for Faces (Lasso DCT, Lasso Wavelet, DCGAN; up to 2500 measurements) and MNIST (Lasso, VAE; up to 500 measurements).]

Proof Outline (ReLU-based networks)
Show range(G) lies within a union of n^(dk) k-dimensional hyperplanes.
◮ Then analogous to the proof for sparsity: (n choose k) ≤ 2^(k log(n/k)) hyperplanes.
◮ So dk log n Gaussian measurements suffice.
ReLU-based network:
◮ Each layer is z → ReLU(A_i z).
◮ ReLU(y)_i = y_i if y_i ≥ 0, and 0 otherwise.
Input to layer 1: a single k-dimensional hyperplane.
Lemma: Layer 1's output lies within a union of n^k k-dimensional hyperplanes.
Induction: the final output lies within n^(dk) k-dimensional hyperplanes.

Proof of Lemma
Lemma: Layer 1's output lies within a union of n^k k-dimensional hyperplanes.
z is k-dimensional.
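The extract ends here. For completeness, a sketch of the counting behind the lemma (the standard hyperplane-arrangement bound, added for illustration and possibly differing in constants from the deck's argument): each of the n rows a_i of the first layer's matrix A_1 splits the k-dimensional input space by the hyperplane a_i·z = 0. Within one cell of this arrangement the on/off pattern of the ReLUs is fixed, so the layer acts as a single linear map and its output lies in one k-dimensional affine subspace. The number of cells satisfies

    \#\{\text{cells of } n \text{ hyperplanes in } \mathbb{R}^k\} \;\le\; \sum_{i=0}^{k} \binom{n}{i} \;=\; O(n^k),

which matches the n^k count in the lemma up to constants; applying the same bound at each of the d layers gives the n^(dk) total.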
