Compressed Sensing and Generative Models
Ashish Bora Ajil Jalal Eric Price Alex Dimakis
UT Austin
Ashish Bora, Ajil Jalal, Eric Price, Alex Dimakis (UT Austin) Compressed Sensing and Generative Models 1 / 33
◮ Naively: m ≥ n or else underdetermined: multiple x possible.
◮ But most x aren’t plausible.
◮ This is why compression is possible.
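The underdetermined-system point can be checked numerically: with m < n, adding any null-space direction of A gives a second signal with identical measurements. A minimal numpy sketch (the dimensions are arbitrary illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 100, 30                         # signal dimension n, measurements m < n
A = rng.standard_normal((m, n))

x = rng.standard_normal(n)             # one candidate signal
null_basis = np.linalg.svd(A)[2][m:]   # rows of Vh spanning the null space of A
x_alt = x + null_basis[0]              # a different signal...

y = A @ x
assert np.allclose(A @ x_alt, y)       # ...with the same measurements
```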
◮ Intractable to compute.
◮ Sparsity + other constraints (“structured sparsity”)
◮ For this talk: ignore η, so y = Ax.
◮ Reconstruction accuracy proportional to model accuracy.
◮ m = Θ(k log(n/k)) suffices for (1).
◮ Such an …
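As a sketch of sparsity-based recovery at this measurement count, here is a minimal ISTA (iterative soft thresholding) solver for the Lasso objective; λ, the iteration count, and the problem sizes are illustrative choices, not settings from the talk:

```python
import numpy as np

def ista(A, y, lam, iters):
    """Minimize 0.5*||Ax - y||^2 + lam*||x||_1 by iterative soft thresholding."""
    L = np.linalg.norm(A, 2) ** 2              # Lipschitz constant of the smooth part
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        g = x - A.T @ (A @ x - y) / L          # gradient step
        x = np.sign(g) * np.maximum(np.abs(g) - lam / L, 0.0)  # soft threshold
    return x

rng = np.random.default_rng(1)
n, k, m = 200, 5, 60                           # m well below n, on the k log(n/k) scale
A = rng.standard_normal((m, n)) / np.sqrt(m)
x_true = np.zeros(n)
x_true[rng.choice(n, size=k, replace=False)] = rng.standard_normal(k)
x_hat = ista(A, A @ x_true, lam=0.01, iters=2000)
```

With Gaussian A and m ≈ k log(n/k) the estimate lands close to the true sparse vector.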
◮ Better structural understanding should give fewer measurements.
◮ Deep convolutional neural networks.
◮ In particular: generative models.
◮ G is a d-layer ReLU-based neural network.
◮ When A is a random Gaussian matrix.
◮ For any Lipschitz G, m = O(k log(rL/δ)) suffices.
◮ Morally the same O(kd log n) bound.
◮ Compared to O(k log n) for sparsity-based methods.
◮ k here can be much smaller.
◮ Just like for training, no proof this converges.
◮ Approximate solution approximately gives (3).
◮ Can check that …
◮ In practice, optimization error is negligible.
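The recovery procedure, gradient descent on z to minimize ‖A G(z) − y‖², can be sketched with a tiny hand-rolled two-layer ReLU generator (illustrative sizes; normalized gradient steps for stability). As noted above there is no convergence proof, but the residual can be checked directly:

```python
import numpy as np

rng = np.random.default_rng(2)
k, h, n, m = 5, 20, 50, 25
W1, W2 = rng.standard_normal((h, k)), rng.standard_normal((n, h))
A = rng.standard_normal((m, n)) / np.sqrt(m)

def G(z):
    # two-layer ReLU generator, stand-in for a trained model
    return np.maximum(W2 @ np.maximum(W1 @ z, 0.0), 0.0)

y = A @ G(rng.standard_normal(k))      # noiseless measurements of a generated image

def loss_and_grad(z):
    h1 = W1 @ z; a1 = np.maximum(h1, 0.0)
    h2 = W2 @ a1; a2 = np.maximum(h2, 0.0)
    r = A @ a2 - y
    g2 = (A.T @ r) * (h2 > 0)          # backprop through the outer ReLU layer
    g1 = (W2.T @ g2) * (h1 > 0)        # ...and the inner one
    return 0.5 * r @ r, W1.T @ g1

z = rng.standard_normal(k)
loss0 = loss_and_grad(z)[0]
best = loss0
for _ in range(500):
    loss, grad = loss_and_grad(z)
    best = min(best, loss)
    z -= 0.01 * grad / (np.linalg.norm(grad) + 1e-12)   # normalized step
```

The final residual is checkable even without a convergence guarantee, which is exactly the "approximate solution approximately gives (3)" point.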
Ashish Bora, Ajil Jalal, Eric Price, Alex Dimakis (UT Austin) Compressed Sensing and Generative Models 14 / 33
◮ k-sparse + more =
Ashish Bora, Ajil Jalal, Eric Price, Alex Dimakis (UT Austin) Compressed Sensing and Generative Models 14 / 33
◮ k-sparse + more =
Ashish Bora, Ajil Jalal, Eric Price, Alex Dimakis (UT Austin) Compressed Sensing and Generative Models 14 / 33
◮ k-sparse + more =
◮ Conditions on manifold for which recovery is possible. Ashish Bora, Ajil Jalal, Eric Price, Alex Dimakis (UT Austin) Compressed Sensing and Generative Models 14 / 33
◮ k-sparse + more =
◮ Conditions on manifold for which recovery is possible.
Ashish Bora, Ajil Jalal, Eric Price, Alex Dimakis (UT Austin) Compressed Sensing and Generative Models 14 / 33
◮ k-sparse + more =
◮ Conditions on manifold for which recovery is possible.
◮ Train deep network to encode and/or decode. Ashish Bora, Ajil Jalal, Eric Price, Alex Dimakis (UT Austin) Compressed Sensing and Generative Models 14 / 33
[Figure: reconstructions (Original / Lasso / VAE) and plots of reconstruction error (per pixel) vs. number of measurements: Lasso vs. VAE; Lasso (DCT), Lasso (Wavelet) vs. DCGAN.]
◮ Then analogous to proof for sparsity: …
◮ So dk log n Gaussian measurements suffice.
◮ Each layer is z → ReLU(Ai z).
◮ ReLU(y)i = max(yi, 0).
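The architecture in the argument, d layers of z → ReLU(Ai z) with the coordinate-wise ReLU(y)i = max(yi, 0), is easy to state in code (random weights and sizes are illustrative):

```python
import numpy as np

def relu(y):
    # ReLU(y)_i = max(y_i, 0), applied coordinate-wise
    return np.maximum(y, 0.0)

def generator(z, weights):
    """d-layer generator: each layer maps z -> ReLU(A_i z)."""
    for A_i in weights:
        z = relu(A_i @ z)
    return z

rng = np.random.default_rng(3)
k, n = 4, 16
weights = [rng.standard_normal((8, k)), rng.standard_normal((n, 8))]
x = generator(rng.standard_normal(k), weights)   # a point in the range of G
```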
◮ 1 + (1 + 2 + · · · + n) = (n^2 + n + 2)/2.
◮ n half-spaces divide R^k into fewer than n^k regions.
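The counting argument can be checked directly: in general position, n hyperplanes split R^k into at most Σ_{i≤k} C(n, i) regions, which for k = 2 matches the (n^2 + n + 2)/2 formula above and stays far below n^k:

```python
from math import comb

def num_regions(n, k):
    """Max regions that n hyperplanes in general position create in R^k."""
    return sum(comb(n, i) for i in range(k + 1))

# k = 2: lines in the plane, matching 1 + (1 + 2 + ... + n) = (n^2 + n + 2)/2
for n in range(1, 30):
    assert num_regions(n, 2) == (n * n + n + 2) // 2
```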
◮ O(1) approximation factor ⇐ …
◮ The optimization has no local minima [Hand-Voroninski]
◮ L = O(1) not n^d, so m = O(k log n), if k ≪ n/d.
◮ A is diagonal, zeros and ones.
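Inpainting as a measurement matrix: A diagonal with 0/1 entries simply zeroes out the missing pixels. A quick sketch (mask fraction is an arbitrary choice):

```python
import numpy as np

rng = np.random.default_rng(4)
n = 12
x = rng.standard_normal(n)                    # flattened image
mask = rng.random(n) < 0.7                    # keep ~70% of pixels
A = np.diag(mask.astype(float))               # diagonal matrix of zeros and ones

y = A @ x
assert np.allclose(y[mask], x[mask])          # observed pixels pass through
assert np.allclose(y[~mask], 0.0)             # masked pixels are zeroed out
```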
[Diagram: GAN. z → G → generated image; generated or real image → D → “real?”]
[Figure: reconstruction error (per pixel) vs. number of measurements (m): AmbientGAN (ours) vs. Lasso.]
[Diagram: AmbientGAN. z → G → generated image → f → simulated measurement; real or simulated measurement → D.]
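The AmbientGAN training setup can be sketched as follows: the discriminator compares real measurements against simulated measurements f(G(z)), never clean images. The one-layer G, the inpainting f, and all sizes below are illustrative stand-ins, not the paper's architecture:

```python
import numpy as np

rng = np.random.default_rng(5)
k, n = 4, 16
W = rng.standard_normal((n, k))

def G(z):
    # stand-in generator: a single linear layer (illustrative only)
    return W @ z

def f(x):
    # measurement process: random inpainting, i.e. a diagonal 0/1 operator
    return x * (rng.random(x.shape) < 0.7)

# The discriminator's "fake" batch is made of simulated measurements f(G(z)).
z_batch = rng.standard_normal((8, k))
fake_measurements = np.stack([f(G(z)) for z in z_batch])
```

Because both inputs to D live in measurement space, the generator is pushed to model the clean image distribution even though no clean image is ever observed.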
◮ Can use lossy measurements to learn a generative model of the underlying distribution.
◮ Can use a generative model to reconstruct from lossy measurements.
◮ Take Gaussian blur plus Gaussian noise.
◮ Wiener filter before GAN: lose frequencies beyond O(1) standard deviations.
◮ With N data points, can we learn log N standard deviations?
◮ Lipschitz parameter at initialization is much smaller than n^d...
◮ ...but we don’t actually expect it to be small after training.
◮ Computational problem: pseudodeterminant of Jacobian matrix.
◮ Speed-up with linear sketching?