Invertible Generative Models for Inverse Problems: Mitigating Representation Error and Dataset Bias
SLIDE 1

Invertible Generative Models for Inverse Problems: Mitigating Representation Error and Dataset Bias

  • M. Asim, M. Daniels, O. Leong, P. Hand, A. Ahmed

SLIDE 2-5

Inverse Problems with Generative Models as Image Priors

SLIDE 6

Contributions

1. Trained INN priors provide SOTA performance in a variety of inverse problems
2. Trained INN priors exhibit strong performance on out-of-distribution images
3. Theoretical guarantees in the case of a linear invertible model

SLIDE 7

Linear Inverse Problems in Imaging
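The body of this slide (equations rendered as images) did not survive the transcript. For context, a hedged reconstruction of the standard setup the later formulation slides build on: recover an image x from linear measurements

```latex
y = A x + \eta, \qquad A \in \mathbb{R}^{m \times n},\; m \le n,\; \eta \sim \mathcal{N}(0, \sigma^2 I).
```

Denoising (A = I), compressed sensing (random A with m < n), and inpainting (A a pixel mask) are all instances.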

SLIDE 8

Invertible Generative Models via Normalizing Flows

Fig 1. RealNVP (Dinh, Sohl-Dickstein, Bengio)

  • Learned invertible map
  • Maps Gaussian to signal distribution
  • Signal is a composition of flow steps
  • Admits exact calculation of image likelihood (sketched below)
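A minimal sketch of that exact likelihood computation via the change-of-variables formula. The `flow_steps` API (each step returning its output and per-sample log|det Jacobian|) is a hypothetical stand-in for a RealNVP/GLOW implementation:

```python
import math
import torch

def flow_log_likelihood(x, flow_steps):
    """Exact log p(x) via change of variables:
    log p(x) = log p_Z(z) + sum_k log|det J_k|,
    where z = f(x) is x pushed through all flow steps to the Gaussian latent.
    """
    z = x                                   # (batch, d) flattened images
    log_det = x.new_zeros(x.shape[0])
    for step in flow_steps:                 # hypothetical API: each invertible
        z, ld = step.forward(z)             # step returns (output, log|det J|)
        log_det = log_det + ld
    d = z.shape[1]
    log_pz = -0.5 * (z.pow(2).sum(dim=1) + d * math.log(2 * math.pi))
    return log_pz + log_det                 # standard-Gaussian latent density
```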

SLIDE 9

Central Architectural Element: affine coupling layer

Affine coupling layer (sketched below):
  1. Split input activations
  2. Compute learned affine transform
  3. Apply the transformation

Fig 2. RealNVP (Dinh, Sohl-Dickstein, Bengio)

Has a tractable Jacobian determinant. Examples: RealNVP, GLOW.
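A minimal PyTorch sketch of the three steps above, in the RealNVP style; the layer sizes and tanh-stabilized scale are illustrative choices, not the paper's exact architecture:

```python
import torch
import torch.nn as nn

class AffineCoupling(nn.Module):
    """RealNVP-style affine coupling layer (sketch, assumes even `dim`)."""

    def __init__(self, dim, hidden=256):
        super().__init__()
        # small network predicting per-element log-scale and shift
        self.net = nn.Sequential(
            nn.Linear(dim // 2, hidden), nn.ReLU(),
            nn.Linear(hidden, dim),
        )

    def forward(self, x):
        x1, x2 = x.chunk(2, dim=-1)                 # 1. split activations
        log_s, t = self.net(x1).chunk(2, dim=-1)    # 2. learned affine params
        log_s = torch.tanh(log_s)                   # keep scales well-conditioned
        y2 = x2 * torch.exp(log_s) + t              # 3. apply the transform
        # Jacobian is triangular, so log|det J| is just the sum of log-scales
        return torch.cat([x1, y2], dim=-1), log_s.sum(dim=-1)

    def inverse(self, y):
        y1, y2 = y.chunk(2, dim=-1)
        log_s, t = self.net(y1).chunk(2, dim=-1)
        log_s = torch.tanh(log_s)
        x2 = (y2 - t) * torch.exp(-log_s)           # exact, cheap inversion
        return torch.cat([y1, x2], dim=-1)
```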

SLIDE 10

Formulation for Denoising

Given:
  1. Noisy measurements of all pixels:
  2. Trained INN:

Find:

MLE formulation over x-space:
Proxy in z-space:
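The equations on this slide were images and are missing from the transcript. A hedged reconstruction from the paper: given noisy measurements y = x + η and a trained INN G, the x-space MLE formulation is min_x ||x − y||² − γ log p_G(x), and the z-space proxy replaces the image likelihood with the Gaussian latent norm, min_z ||G(z) − y||² + γ||z||. A minimal PyTorch sketch, where `G` (latent-to-image) and `G.latent_dim` are a hypothetical trained-flow API:

```python
import torch

def denoise(y, G, gamma=0.1, steps=500, lr=0.05):
    """Denoise y by optimizing the z-space proxy objective
    ||G(z) - y||^2 + gamma * ||z|| over the latent of a trained INN G."""
    # start near z = 0, the peak of the Gaussian latent likelihood
    z = (0.01 * torch.randn(G.latent_dim)).requires_grad_(True)
    opt = torch.optim.Adam([z], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        loss = (G(z) - y).pow(2).sum() + gamma * z.norm()
        loss.backward()
        opt.step()
    return G(z).detach()                  # the denoised image estimate
```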

SLIDE 11

INNs can outperform BM3D in denoising

Given:
  1. Noisy measurements of all pixels:
  2. Trained INN:

Find:

SLIDE 12

Formulation for Compressed Sensing

Given:
Find:
Solve via optimization in z-space:
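Again the equations are missing from the transcript; a hedged reconstruction from the paper: given measurements y = Ax and a trained INN G, solve min_z ||A G(z) − y||² (optionally plus γ||z||) and report x̂ = G(ẑ). Semantic inpainting (SLIDE 16) fits the same template with A a binary pixel mask. A sketch reusing the hypothetical `G` API from the denoising example:

```python
import torch

def compressed_sensing(y, A, G, gamma=0.0, steps=1000, lr=0.05):
    """Recover x from y ≈ A x by latent-space optimization over a trained INN G."""
    # initialization near the origin (high latent likelihood) supplies
    # implicit regularization even with gamma = 0 (see the discussion slides)
    z = (0.01 * torch.randn(G.latent_dim)).requires_grad_(True)
    opt = torch.optim.Adam([z], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        loss = (A @ G(z) - y).pow(2).sum()    # measurement misfit
        if gamma:
            loss = loss + gamma * z.norm()    # optional explicit penalty
        loss.backward()
        opt.step()
    return G(z).detach()
```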

SLIDE 13

Compressed Sensing

SLIDE 14-15

INNs exhibit strong OOD performance

SLIDE 16

Strong OOD Performance on Semantic Inpainting

SLIDE 17

Theory for Linear Invertible Model

Theorem: Let [...]. Given m Gaussian measurements [...], the MLE estimator obeys [...]
SLIDE 18

Discussion

Why do INNs perform so well OOD? Invertibility guarantees zero representation error.
Where does regularization occur? Explicitly by penalization, or implicitly by initialization + optimization.

SLIDE 19

When is regularization helpful in CS?

  • High-likelihood init: regularization by initialization + optimization algorithm
  • Low-likelihood init: explicit regularization needed

SLIDE 20-21

Why is likelihood in latent space a good proxy?

High-likelihood regions in latent space generally correspond to high-likelihood regions in image space.
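A hedged gloss of why (not spelled out in the transcript): by the change-of-variables formula, image-space and latent-space log-likelihoods differ only by a Jacobian term,

```latex
\log p_G(x) = \log p_Z\big(G^{-1}(x)\big) + \log\big|\det J_{G^{-1}}(x)\big|,
\qquad \log p_Z(z) = -\tfrac{1}{2}\lVert z\rVert^2 + \text{const},
```

so a small latent norm tracks high image likelihood wherever the Jacobian term varies slowly.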

SLIDE 22

Contributions

1. Trained INN priors provide SOTA performance in a variety of inverse problems
2. Trained INN priors exhibit strong performance on out-of-distribution images
3. Theoretical guarantees in the case of a linear invertible model