Automatic Posterior Transformation for Likelihood-free Inference - PowerPoint PPT Presentation




SLIDE 1

Automatic Posterior Transformation for Likelihood-free Inference

David S Greenberg Marcel Nonnenmacher Jakob H Macke

Technical University of Munich Computational Neuroengineering Department of Electrical and Computer Engineering

SLIDE 2

For many important simulators, the likelihood is unavailable

- Climate
- Cell biology (Mycoplasma genitalium; Karr et al., 2016)
- Neuroscience (Homarus americanus; Prinz et al., 2004)
- Cosmology

SLIDE 3

Bayesian Inference without Likelihood Evaluation

Given a simulator, a prior on its parameters, and observed data, we aim to infer the posterior distribution over the parameters. We cannot evaluate the likelihood p(data | parameters), but we can sample from it by running the simulator: simulation ∼ p(data | parameters).

[Diagram: the forward model links the prior over parameters and the measured data to the posterior]
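As a baseline illustration of inference with only simulator access, here is a minimal rejection-ABC sketch. The simulator, prior, and tolerance below are assumed toy choices for illustration, not from the talk: parameters are kept whenever their simulated data land close to the observation.

```python
# Toy illustration: the likelihood is implicit, but we can still do Bayesian
# inference by simulating. Minimal rejection-ABC sketch (assumed toy setup).
import random

random.seed(1)

def simulate(theta):
    # Implicit-likelihood forward model: we can sample it, not evaluate it.
    return theta + random.gauss(0.0, 0.5)

x_obs = 1.0
eps = 0.1            # acceptance tolerance
accepted = []
while len(accepted) < 500:
    theta = random.gauss(0.0, 1.0)   # draw parameters from the prior N(0, 1)
    x = simulate(theta)              # run the simulator
    if abs(x - x_obs) < eps:         # keep parameters whose data match x_obs
        accepted.append(theta)

post_mean = sum(accepted) / len(accepted)
# For this conjugate toy, the exact posterior mean is x_obs / 1.25 = 0.8.
```

Shrinking eps makes the accepted sample approach the true posterior, at the cost of many more simulations, which is the scaling problem the following slides address.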

SLIDE 4

Sequential Neural Posterior Estimation (SNPE)

[Diagram: a density estimator trained on simulated data from the forward model maps the prior and the measured data to the posterior]
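The SNPE idea on this slide can be illustrated with a toy sketch. Below, a single round is shown and a linear-Gaussian fit stands in for the neural density estimator (an MDN or flow in the actual method); the simulator and all names are illustrative assumptions, not the paper's code.

```python
# Schematic single-round neural posterior estimation on a linear-Gaussian toy,
# where the conditional density estimator can be a simple regression fit.
import random
import statistics

random.seed(0)

def prior():
    return random.gauss(0.0, 1.0)          # theta ~ N(0, 1)

def simulate(theta):
    return theta + random.gauss(0.0, 0.5)  # x ~ N(theta, 0.5^2)

# 1) Draw parameters from the prior and simulate data.
thetas = [prior() for _ in range(20000)]
xs = [simulate(t) for t in thetas]

# 2) "Train" a conditional density estimator q(theta | x).
#    Here: linear regression theta ≈ a*x + b plus residual variance,
#    standing in for a mixture-density network or normalizing flow.
mx, mt = statistics.fmean(xs), statistics.fmean(thetas)
cov = sum((x - mx) * (t - mt) for x, t in zip(xs, thetas)) / len(xs)
var = sum((x - mx) ** 2 for x in xs) / len(xs)
a = cov / var
b = mt - a * mx
resid_var = sum((t - (a * x + b)) ** 2 for x, t in zip(xs, thetas)) / len(xs)

# 3) Evaluate the estimated posterior at the measured data.
x_obs = 1.0
post_mean, post_var = a * x_obs + b, resid_var
# Analytic posterior for this conjugate toy: mean x_obs/1.25 = 0.8, var 0.2.
```

In the sequential variants, later rounds draw parameters from the current posterior estimate instead of the prior; correcting for that proposal shift is exactly where SNPE-A, SNPE-B, and APT differ.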

SLIDE 5

Automatic Posterior Transformation vs. previous methods

SNPE-A (Papamakarios & Murray, 2016): restricts the choice of proposal and density estimator; cannot reuse data across rounds

SNPE-B (Lueckmann et al., 2017): importance weights limit performance

SNL (Papamakarios et al., 2017): estimates the likelihood instead of the posterior; requires MCMC after training

Classical ABC: requires many more simulations

Automatic Posterior Transformation (APT, ours):

- Posterior estimation with flows or MDNs
- Simulation parameters can be freely chosen
- Feature learning (no summary statistics)
- Scales to high-dimensional data (10,000+)

[Figure: posterior estimates vs. the true posterior after 1000, 5000, and 10000 simulations for each method; classical ABC shown after 1000, 100K, and 500M simulations]
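The transformation behind APT can be stated with Bayes' rule. If later rounds draw parameters from a proposal p̃(θ) instead of the prior p(θ), the training data follow a "proposal posterior" that relates to the true posterior by a standard identity (notation here is assumed, chosen to mirror the paper's setup):

```latex
% Proposal posterior: Bayes' rule with the proposal \tilde p(\theta) in
% place of the prior p(\theta), for the same likelihood p(x \mid \theta).
\tilde p(\theta \mid x)
  = \frac{p(x \mid \theta)\,\tilde p(\theta)}{\tilde p(x)}
  = p(\theta \mid x)\,\frac{\tilde p(\theta)}{p(\theta)}\,\frac{p(x)}{\tilde p(x)}
% APT trains q(\theta \mid x) so that the reweighted density
%   \tilde q(\theta \mid x) \propto q(\theta \mid x)\,\tilde p(\theta)/p(\theta)
% matches \tilde p(\theta \mid x); then q(\theta \mid x) itself recovers
% the true posterior p(\theta \mid x), with no importance weights and no
% restriction on the proposal or density-estimator family.
```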

SLIDE 6

Lotka-Volterra model

[Figure: predator and prey population counts (0-200) over 150 time steps (Lotka, 1920)]
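A minimal Gillespie-style simulation of a stochastic Lotka-Volterra model can be sketched as below. The four-rate parameterization (θ1..θ4) and the example rate values follow common likelihood-free inference benchmarks; the talk's exact setup may differ.

```python
# Gillespie simulation of stochastic predator-prey dynamics with four
# reaction rates (assumed benchmark-style parameterization).
import random

def lotka_volterra(theta, x0=50, y0=100, t_max=30.0, seed=0):
    rng = random.Random(seed)
    t1, t2, t3, t4 = theta
    x, y = x0, y0                      # predator and prey counts
    t, traj = 0.0, [(0.0, x0, y0)]
    while t < t_max and len(traj) < 100_000:
        rates = [t1 * x * y,           # predator birth
                 t2 * x,               # predator death
                 t3 * y,               # prey birth
                 t4 * x * y]           # prey eaten by a predator
        total = sum(rates)
        if total == 0:                 # both populations extinct
            break
        t += rng.expovariate(total)    # waiting time to the next reaction
        r = rng.uniform(0.0, total)    # choose which reaction fires
        if r < rates[0]:
            x += 1
        elif r < rates[0] + rates[1]:
            x -= 1
        elif r < rates[0] + rates[1] + rates[2]:
            y += 1
        else:
            y -= 1
        traj.append((t, x, y))
    return traj

traj = lotka_volterra((0.01, 0.5, 1.0, 0.01))
```

Because each run only samples a trajectory, the likelihood of an observed population record is intractable, which makes this a standard test case for likelihood-free methods.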

[Figure: inferred posterior over θ1-θ4, and −log p(true parameters) (lower is better) vs. number of simulations (10³ to 10⁴, log scale) for SNPE-A, SNPE-B, SNL, and APT (ours)]

SLIDE 7

Rock-Paper-Scissors model

Reichenbach et al., 2008

Simulator is defined by a stochastic PDE. Data is 10000 dimensional. CNN-based feature learning. APT infers tight posteriors around the ground-truth parameters.
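To convey the flavor of cyclic rock-paper-scissors dynamics, here is a toy lattice sketch in the spirit of Reichenbach et al. (2008): three species where 1 beats 2, 2 beats 3, and 3 beats 1. This is an illustrative stand-in only; the talk's simulator is a stochastic PDE producing 10000-dimensional data.

```python
# Toy cyclic-dominance lattice: each selection event lets a random site
# displace a neighbor of the species it beats (periodic boundaries).
import random

def step(grid, rng, n_events=1000):
    size = len(grid)
    beats = {1: 2, 2: 3, 3: 1}   # attacker -> species it replaces
    for _ in range(n_events):
        i, j = rng.randrange(size), rng.randrange(size)
        di, dj = rng.choice([(0, 1), (0, -1), (1, 0), (-1, 0)])
        ni, nj = (i + di) % size, (j + dj) % size
        if beats[grid[i][j]] == grid[ni][nj]:
            grid[ni][nj] = grid[i][j]          # selection event
    return grid

rng = random.Random(0)
size = 32
grid = [[rng.choice([1, 2, 3]) for _ in range(size)] for _ in range(size)]
grid = step(grid, rng, n_events=20000)
```

Cyclic dominance of this kind produces the spiral coexistence patterns that make the model's raw spatial output high-dimensional, motivating the CNN-based feature learning mentioned above.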

[Figure: −log p(true parameters) for APT (ours) on the rock-paper-scissors model]

SLIDE 8

Thanks!

More details at poster 238, tonight at 6:30pm in the Pacific Ballroom.

Marcel Nonnenmacher Jakob Macke

Funded by the German Research Foundation (DFG) through SFB 1233 (276693517), SFB 1089, and SPP 2041, and by the German Federal Ministry of Education and Research (BMBF, project 'ADMIMEM', FKZ 01IS18052 A-D).