
Scalable Training of Inference Networks for Gaussian-Process Models



  1. Scalable Training of Inference Networks for Gaussian-Process Models. Jiaxin Shi (Tsinghua University), joint work with Mohammad Emtiyaz Khan and Jun Zhu.

  2. Gaussian Process. A GP prior over functions (a Gaussian field) is specified by a mean function and a covariance function (kernel). Exact posterior inference costs O(N^3) and is tractable only for conjugate likelihoods. Sparse variational GPs [Titsias, 09; Hensman et al., 13] approximate the posterior through M inducing points, reducing the cost to O(NM^2 + M^3). (A sketch of the exact computation follows.)
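To make the complexity concrete, here is a minimal NumPy sketch of exact GP regression (zero mean, RBF kernel; the function names are ours, not from the talk). The Cholesky factorization of the N x N kernel matrix is the O(N^3) step that sparse variational methods avoid.

```python
import numpy as np

def rbf_kernel(X1, X2, lengthscale=1.0, variance=1.0):
    # k(x, x') = variance * exp(-||x - x'||^2 / (2 * lengthscale^2))
    d2 = (np.sum(X1**2, 1)[:, None] + np.sum(X2**2, 1)[None, :]
          - 2.0 * X1 @ X2.T)
    return variance * np.exp(-0.5 * d2 / lengthscale**2)

def gp_regression(X, y, X_star, noise_var=0.1):
    # Exact posterior for a conjugate (Gaussian) likelihood.
    K = rbf_kernel(X, X) + noise_var * np.eye(len(X))
    L = np.linalg.cholesky(K)                    # O(N^3): the bottleneck
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    K_s = rbf_kernel(X, X_star)
    mean = K_s.T @ alpha                         # posterior mean
    V = np.linalg.solve(L, K_s)
    cov = rbf_kernel(X_star, X_star) - V.T @ V   # posterior covariance
    return mean, cov
```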

  3–4. Inference Networks for GP Models. Remove the sparse (inducing-point) assumption: train an inference network that maps inputs directly to the posterior Gaussian field. [Slide figure: inputs and observations feed the inference network, whose Gaussian field output is used for prediction.]

  5. Examples of Inference Networks
  ● Bayesian neural networks [Sun et al., 19]: keeping the posterior over the weights (w) in weight space gives an intractable output density in function space.
  ● The inference network architecture can instead be derived from the weight-space posterior:
  ○ random feature expansions [Cutajar et al., 18], i.e. sin/cos features with random frequencies s (a sketch follows);
  ○ deep neural nets.
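As a concrete (hypothetical) instance of the random-feature-expansion architecture, the sin/cos features with random frequencies s from the slide can be written as below; the names are ours, and the network is then Bayesian linear regression on these features, so the output density stays a tractable Gaussian.

```python
import numpy as np

def rff_features(X, S):
    # Random Fourier features for an RBF kernel: rows of S are random
    # frequencies s, and phi(x) = [cos(x.s), sin(x.s)] / sqrt(M) satisfies
    # phi(x) @ phi(x') ~= k(x, x') for large M.
    proj = X @ S.T                    # (N, M) random projections
    M = S.shape[0]
    return np.concatenate([np.cos(proj), np.sin(proj)], axis=1) / np.sqrt(M)

# With a Gaussian posterior N(mu, Sigma) over the last-layer weights w, the
# induced function f(x) = phi(x) @ w is a Gaussian field whose marginals
# (phi @ mu, phi @ Sigma @ phi.T) are available in closed form.
```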

  6. Minibatch Training is Difficult. Functional Variational Bayesian Neural Networks (Sun et al., 19) match the variational and true posterior processes at arbitrary measurement points.
  ● Full-batch fELBO: the data log-likelihood term minus a functional KL at the measurement points.
  ● Practical fELBO: a minibatch estimate of the above.
  ● This objective performs an improper minibatch approximation of the KL divergence term (see the objectives sketched below).
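Reconstructed in our notation (following Sun et al., 19; not verbatim from the slide), the two objectives and the source of the bias are:

```latex
% Full-batch fELBO: match processes at measurement points X_M
\mathcal{L}(q) = \mathbb{E}_{q}\Big[\sum_{i=1}^{N} \log p(y_i \mid f(x_i))\Big]
               - \mathrm{KL}\big[\, q(f^{X_M}) \,\|\, p(f^{X_M}) \,\big]

% Practical fELBO: rescale the likelihood term on a minibatch B
\hat{\mathcal{L}}(q) = \frac{N}{|B|} \mathbb{E}_{q}\Big[\sum_{i \in B} \log p(y_i \mid f(x_i))\Big]
                     - \mathrm{KL}\big[\, q(f^{X_M}) \,\|\, p(f^{X_M}) \,\big]

% The likelihood term subsamples correctly, but the functional KL does not
% decompose as a sum over data points, so its minibatch treatment is an
% improper approximation of the full KL.
```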

  7. Scalable Training of Inference Networks for GP Models. Stochastic, functional mirror descent: work with the functional density directly [Dai et al., 16; Cheng & Boots, 16].
  ○ Natural gradient in density space.
  ○ Minibatch approximation with a stochastic functional gradient.
  ● The step has a closed-form solution: an adaptive Bayesian filter, where the current posterior becomes an adapted prior on seeing the next data point (see the sketch below).
  ● Sequentially applying Bayes' rule is the most natural gradient:
  ○ in conjugate models it is equivalent to natural gradient for exponential families [Raskutti & Mukherjee, 13; Khan & Lin, 17].
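A sketch of the resulting update, derived from the mirror-descent step in density space as described (our notation; step size beta_t, minibatch B_t):

```latex
% One step of stochastic functional mirror descent on the ELBO has the
% closed-form solution
q_{t+1}(f) \propto q_t(f)^{\,1-\beta_t}
  \Big[\, p(f) \prod_{i \in B_t} p\big(y_i \mid f(x_i)\big)^{N/|B_t|} \Big]^{\beta_t}

% Interpretation: an adaptive Bayesian filter in which q_t plays the role
% of an adapted prior; with beta_t = 1 and a full batch this is exactly one
% application of Bayes' rule.
```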

  8. Scalable Training of Inference Networks for GP Models. Minibatch training of inference networks: a student-teacher scheme.
  ● An idea from filtering: bootstrap, with the filter update as the teacher and the network as the student (see the sketch below).
  ○ A similar idea: temporal difference (TD) learning with function approximation.
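A hypothetical sketch of the bootstrapped student-teacher loop (all helper names are ours, with PyTorch-style autograd assumed; a sketch under these assumptions, not the authors' implementation):

```python
for step in range(num_steps):
    x_batch, y_batch = sample_minibatch(data)   # data subsample
    x_meas = sample_measurement_points()        # where to match marginals

    # Teacher: one filter (mirror-descent) update of the student's current
    # posterior on the minibatch; for a Gaussian likelihood its marginals
    # at x_meas are available in closed form (GP regression).
    t_mean, t_cov = filter_update(student, x_batch, y_batch, x_meas, beta)

    # Student: regress the network's marginals at x_meas toward the
    # teacher's, a bootstrapped target as in TD learning.
    s_mean, s_cov = student(x_meas)
    loss = gaussian_kl(s_mean, s_cov, t_mean, t_cov)
    loss.backward()
    optimizer.step()
    optimizer.zero_grad()
```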

  9. Scalable Training of Inference Networks for GP Models. Minibatch training of inference networks:
  ● Gaussian likelihood case: closed-form marginals of the teacher at the measurement locations; equivalent to GP regression (see below).
  ● Nonconjugate case: optimize an upper bound instead.
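One way to see the "equivalent to GP regression" claim in the Gaussian-likelihood case (our derivation, taking q_t as the adapted prior and beta_t = 1 for simplicity):

```latex
% With p(y_i | f_i) = N(y_i | f_i, sigma^2), raising the likelihood to the
% power N/|B| merely rescales the noise variance:
p(y_i \mid f_i)^{N/|B|} \propto \mathcal{N}\big(y_i \,\big|\, f_i,\ \sigma^2 |B|/N\big)

% So the filter target
q_{t+1}(f) \propto q_t(f) \prod_{i \in B} p\big(y_i \mid f(x_i)\big)^{N/|B|}
% is again a GP, and its marginals at the measurement locations are the
% closed-form GP-regression posterior with prior q_t and noise sigma^2 |B|/N.
```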

  10. Scalable Training of Inference Networks for GP Models. Measurement points vs. inducing points. [Slide figure: fits with M = 2, 5, 20 for SVGP and GPNet.]
  ● Inducing points govern the expressiveness of the variational approximation.
  ● Measurement points govern the variance of training.

  11. Scalable Training of Inference Networks for GP Models. Effect of proper minibatch training:
  ● Fixes underfitting. [Slide figure: N = 100, batch size = 20; FBNN (M = 20) vs. GPNet (M = 20).]
  ● Better performance with more measurement points. [Slide figure: Airline Delay (700K).]

  12. Scalable Training of Inference Networks for GP Models. Regression & classification:
  ● regression benchmarks;
  ● GP classification with a prior derived from infinite-width Bayesian ConvNets.

  13. Poster #227. Code: https://github.com/thjashin/gp-infer-net
