SLIDE 1 Invertible Generative Models for Inverse Problems: Mitigating Representation Error and Dataset Bias
- M. Asim, M. Daniels, O. Leong, P. Hand, A. Ahmed
SLIDE 2
Inverse Problems with Generative Models as Image Priors
SLIDE 3
Inverse Problems with Generative Models as Image Priors
SLIDE 4
Inverse Problems with Generative Models as Image Priors
SLIDE 5
Inverse Problems with Generative Models as Image Priors
SLIDE 6
Contributions
1. Trained INN priors provide SOTA performance in a variety of inverse problems
2. Trained INN priors exhibit strong performance on out-of-distribution images
3. Theoretical guarantees in the case of a linear invertible model
SLIDE 7
Linear Inverse Problems in Imaging
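As a reminder of the setup these slides assume (standard for this line of work), the measurements take the form

    y = A x^* + \eta, \qquad A \in \mathbb{R}^{m \times n},

where x^* is the unknown image, A is a known linear operator (A = I for denoising, a random matrix with m < n rows for compressed sensing, a pixel-selection mask for inpainting), \eta is noise, and the goal is to recover x^* from y.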
SLIDE 8 Invertible Generative Models via Normalizing Flows
Fig 1. RealNVP (Dinh, Sohl-Dickstein, Bengio)
- Learned invertible map between latent space and image space
- Maps a Gaussian latent distribution to the signal distribution
- The map is a composition of flow steps
- Admits exact calculation of image likelihood
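The exact-likelihood bullet is the change-of-variables formula: for an invertible map G with x = G(z) and latent density p_Z,

    \log p_X(x) = \log p_Z\big(G^{-1}(x)\big) + \log \left| \det J_{G^{-1}}(x) \right|,

and each flow step is designed so that this Jacobian determinant is cheap to compute.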
SLIDE 9 Central Architectural Element: affine coupling layer
Affine coupling layer: 1. Split input activations
2. Compute learned affine transform
3. Apply the transformation
Fig 2. RealNVP (Dinh, Sohl-Dickstein, Bengio)
Has a tractable Jacobian determinant.
Examples: RealNVP, Glow
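A minimal NumPy sketch of the three steps above; scale_net and shift_net are hypothetical stand-ins for the small learned networks used in RealNVP/Glow, and the half-and-half split is one common choice.

import numpy as np

def coupling_forward(x, scale_net, shift_net):
    # 1. Split the input activations into two halves
    x1, x2 = np.split(x, 2)
    # 2. Compute the learned affine transform from the first half
    log_s = scale_net(x1)   # log-scales
    t = shift_net(x1)       # shifts
    # 3. Apply the transform to the second half only
    y = np.concatenate([x1, x2 * np.exp(log_s) + t])
    # Triangular Jacobian: log|det J| is just the sum of the log-scales
    return y, np.sum(log_s)

def coupling_inverse(y, scale_net, shift_net):
    # The first half passes through unchanged, so the same affine
    # parameters can be recomputed and the transform undone exactly.
    y1, y2 = np.split(y, 2)
    x2 = (y2 - shift_net(y1)) * np.exp(-scale_net(y1))
    return np.concatenate([y1, x2])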
SLIDE 10
Formulation for Denoising
Given:
1. Noisy measurements of all pixels
2. A trained INN
Find: an estimate of the clean image
Formulations: MLE over x-space, and a proxy in z-space
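A hedged reconstruction of the missing formulas, assuming the usual flow-prior denoising setup (the exact weight \gamma is an assumption): given y = x^* + \eta and a trained INN G,

    MLE over x-space:  \hat{x} = \arg\min_x \; \|y - x\|_2^2 - \gamma \log p_G(x)
    Proxy in z-space:  \hat{z} = \arg\min_z \; \|y - G(z)\|_2^2 + \gamma \|z\|_2^2, \qquad \hat{x} = G(\hat{z}),

where the proxy replaces the image-space likelihood with the latent-space likelihood discussed on the later slides.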
SLIDE 11
INNs can outperform BM3D in denoising
SLIDE 12
Formulation for Compressed Sensing
Given: compressive measurements of an image and a trained INN
Find: an estimate of the image
Solve via optimization in z-space
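A sketch of the likely z-space program, consistent with the later slides on initialization (treat the exact form as an assumption): with measurements y = A x^* and trained INN G,

    \hat{z} = \arg\min_z \; \|A\,G(z) - y\|_2^2, \qquad \hat{x} = G(\hat{z}),

with z initialized at a high-likelihood latent (e.g. near 0), so that regularization comes from the initialization and the optimization algorithm rather than an explicit penalty.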
SLIDE 13
Compressed Sensing
SLIDE 14
INNs exhibit strong OOD performance
SLIDE 15
INNs exhibit strong OOD performance
SLIDE 16
Strong OOD Performance on Semantic Inpainting
SLIDE 17 Theory for Linear Invertible Model
Theorem: Let the generative model be a linear invertible map. Given m Gaussian measurements, the MLE estimator ...
SLIDE 18
Discussion
Why do INNs perform so well OOD? Invertibility guarantees zero representation error
Where does regularization occur? Explicitly by penalization or implicitly by initialization + optimization
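In symbols, the zero-representation-error point: because G is invertible, every image x (in-distribution or not) lies exactly in its range,

    \min_z \|G(z) - x\|_2 = 0 \quad \text{at} \quad z = G^{-1}(x),

whereas a low-dimensional generative prior can leave a strictly positive representation error on out-of-distribution images.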
SLIDE 19
When is regularization helpful in CS?
- High likelihood init: regularization by initialization + optimization algorithm
- Low likelihood init: explicit regularization needed
SLIDE 20
Why is likelihood in latent space a good proxy?
High likelihood regions in latent space generally correspond to high likelihood regions in image space
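One way to see why, assuming the standard Gaussian latent used by these flows:

    -\log p_Z(z) = \tfrac{1}{2}\|z\|_2^2 + \text{const},

so small \|z\| is exactly high latent likelihood, and the trained flow tends to map such latents to high-likelihood images.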
SLIDE 21
Why is likelihood in latent space a good proxy?
High likelihood regions in latent space generally correspond to high likelihood regions in image space
SLIDE 22
Contributions
1. Trained INN priors provide SOTA performance in a variety of inverse problems
2. Trained INN priors exhibit strong performance on out-of-distribution images
3. Theoretical guarantees in the case of a linear invertible model