Approximate Bayesian inference for latent Gaussian models
Håvard Rue¹
Department of Mathematical Sciences, NTNU, Norway
December 4, 2009
¹ With S. Martino and N. Chopin
Overview: Latent Gaussian models

Latent Gaussian models often have the following hierarchical structure:

    π(x, θ | y) ∝ π(θ) π(x | θ) ∏i π(yi | xi, θ)
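Written out stage by stage (the standard three-level formulation, matching the factorisation above; a zero mean is taken for simplicity):

\[
\begin{aligned}
\theta &\sim \pi(\theta) && \text{(hyperparameters, few)}\\
x \mid \theta &\sim \mathcal{N}\bigl(0,\, Q(\theta)^{-1}\bigr) && \text{(latent Gaussian field)}\\
y \mid x, \theta &\sim {\textstyle\prod_i}\, \pi(y_i \mid x_i, \theta) && \text{(conditionally independent observations)}
\end{aligned}
\]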
Observations {yi} come from an exponential family with mean {µi}, linked to a structured additive predictor

    g(µi) = ∑j fj(zji) + ∑j βj zji + εi,

where the fj are smooth functions of covariates, the βj are linear effects, and εi is an unstructured error term; all of these are given Gaussian priors, so the joint latent field x is Gaussian.
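As a concrete illustration (my example, not one from the slides): for Poisson counts with a log link, a semiparametric regression fits this template as

\[
y_i \mid \eta_i \sim \operatorname{Poisson}\!\left(e^{\eta_i}\right),
\qquad
g(\mu_i) = \log \mu_i = f_1(t_i) + \beta_1 z_{1i} + \epsilon_i,
\]

with, say, a random-walk prior on the smooth effect f1 of time ti.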
Examples

1D: Smoothing count data, general spline smoothing, semi-parametric regression, GLM(M), GAM(M), etc.
2D: Disease mapping, log-Gaussian Cox processes, model-based geostatistics, 1D models with spatial effect(s).
3D: Time series of images, spatio-temporal models.

Features: the latent field x can be large, but is often Markov, and the observations are typically non-Gaussian.
Examples: 1D [figures]
Examples: 2D

[Figure] Disease mapping: Poisson data.
[Figure] Joint disease mapping: Poisson data.
[Figure] Spatial GLM with binomial data.
[Figure] Log-Gaussian Cox process: oaks data (axes x [m], y [m]).

Examples: 2D+

[Figure] Spatial logit model with semiparametric covariates.
Latent Gaussian models: Characteristic features

Tasks: Compute from

    π(x, θ | y) ∝ π(θ) π(x | θ) ∏i π(yi | xi)

the posterior marginals π(xi | y), for some or all i, and/or π(θj | y), for some or all j.
Our approach: do we really need to attack this task using MCMC?
Main ideas

The main ideas are simple and based on the identity

    π(z) = π(x, z) / π(x | z),

leading to the approximation

    π̃(z) = π(x, z) / π̃(x | z), evaluated at x = x*(z).

When π̃(x | z) is the Gaussian approximation, this is the Laplace approximation.
Construct the approximations to π(θ | y) and π(xi | θ, y); then we integrate:

    π(xi | y) = ∫ π(xi | θ, y) π(θ | y) dθ
    π(θj | y) = ∫ π(θ | y) dθ−j
Gaussian Markov random fields (GMRFs)

A Gaussian Markov random field (GMRF), x = (x1, . . . , xn)ᵀ, is a normally distributed random vector with additional Markov properties:

    xi ⊥ xj | x−ij  ⟺  Qij = 0,

where Q is the precision matrix (inverse covariance). Sparse matrices give fast computations!
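A minimal example of this sparsity (my illustration): a stationary AR(1) process xt = φ xt−1 + εt with εt ∼ N(0, 1) is a GMRF whose precision matrix is tridiagonal,

\[
Q =
\begin{pmatrix}
1 & -\phi & & \\
-\phi & 1+\phi^2 & -\phi & \\
 & \ddots & \ddots & \ddots \\
 & & -\phi & 1
\end{pmatrix},
\]

so Qij = 0 whenever |i − j| > 1, in line with the conditional-independence characterisation above.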
The GMRF approximation

    π(x | y) ∝ exp( −(1/2) xᵀQx + ∑i log π(yi | xi) )
             ≈ exp( −(1/2) (x − µ)ᵀ(Q + diag(ci))(x − µ) )
             = π̃G(x | θ, y)

Constructed as follows: expand each log π(yi | xi) to second order around the current µi, update µ, and iterate to convergence (a Newton-type scheme). Markov and computational properties are preserved.
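A toy sketch of this construction for a single latent node with a Poisson observation (my code and parameter values, not the GMRFLib implementation): each iteration expands the log-likelihood to second order around the current mean, which is exactly a Newton step towards the posterior mode.

#include <math.h>
#include <stdio.h>

/* Second-order expansion of the Poisson log-likelihood
   l(x) = y*x - exp(x) around x0:  l(x) ~= const + b*x - 0.5*c*x^2.
   This is the per-node ingredient of the GMRF approximation. */
static void poisson_2order(double y, double x0, double *b, double *c)
{
    double d1 = y - exp(x0);   /* l'(x0)  */
    double d2 = -exp(x0);      /* l''(x0) */
    *c = -d2;                  /* curvature (here always >= 0) */
    *b = d1 - d2 * x0;         /* linear coefficient at x0     */
}

int main(void)
{
    /* prior x ~ N(0, 1/q), one observation y ~ Poisson(exp(x)) */
    double q = 1.0, y = 3.0, mu = 0.0, b, c;
    for (int it = 0; it < 20; it++) {
        poisson_2order(y, mu, &b, &c);
        mu = b / (q + c);      /* mean of the current Gaussian fit */
    }
    poisson_2order(y, mu, &b, &c);
    printf("Gaussian approximation: mean = %.4f, sd = %.4f\n",
           mu, 1.0 / sqrt(q + c));
    return 0;
}

At the fixed point, q·mu = l'(mu), so mu is the posterior mode and the fitted precision q + c matches the curvature there.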
Outline

- Background: The Laplace approximation
  - The Laplace approximation for π(θ|y)
  - The Laplace approximation for π(xi|θ, y)
- The integrated nested Laplace approximation (INLA)
  - Summary
  - Assessing the error
- Examples
  - Stochastic volatility
  - Longitudinal mixed effect model
  - Log-Gaussian Cox process
- Extensions
  - Model choice
  - Automatic detection of "surprising" observations
- Summary and discussion
- Bonus
  - High(er) number of hyperparameters
  - Parallel computing using OpenMP
  - Spatial GLMs
Background: The Laplace approximation

Compute an approximation to the integral

    In = ∫ exp(n g(x)) dx,

where n is a parameter going to ∞. Let x0 be the mode of g(x) and assume g(x0) = 0; then

    g(x) = (1/2) g''(x0)(x − x0)² + · · ·
Then

    In ≈ √( 2π / (n(−g''(x0))) )    (using g(x0) = 0),

and the approximation has

    relative error(n) = 1 + O(1/n).
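A quick numerical check of this rate (my example): take g(x) = cos(x) − 1, so the mode is x0 = 0 with g(x0) = 0 and g''(x0) = −1, and compare quadrature with the Laplace value √(2π/n).

#include <math.h>
#include <stdio.h>

static double g(double x) { return cos(x) - 1.0; }

int main(void)
{
    const double PI = 3.14159265358979323846;
    double ns[] = { 1.0, 10.0, 100.0, 1000.0 };
    for (int k = 0; k < 4; k++) {
        double n = ns[k], sum = 0.0;
        int m = 200000;                     /* trapezoidal rule on (-pi, pi) */
        double h = 2.0 * PI / m;
        for (int i = 0; i <= m; i++) {
            double x = -PI + i * h;
            double w = (i == 0 || i == m) ? 0.5 : 1.0;
            sum += w * exp(n * g(x));
        }
        sum *= h;
        double laplace = sqrt(2.0 * PI / n);
        printf("n = %6.0f   ratio quadrature/laplace = %.6f\n",
               n, sum / laplace);
    }
    return 0;
}

The printed ratios approach 1 like 1 + 1/(8n), a concrete instance of the 1 + O(1/n) relative error.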
If

    gn(x) = (1/n) ∑i gi(x),

then the mode x0 depends on n as well.
If x is multivariate (of dimension d), then

    In ≈ exp(n g(x0)) √( (2π)ᵈ / (nᵈ |−H|) ),

where H is the Hessian matrix at the mode, Hij = ∂²g(x)/∂xi∂xj.
Consider the general problem
then π(θ|y) = π(x, θ|y) π(x|θ, y) for any x!
Approximative Bayesian inference Background: The Laplace approximation
Consider the general problem
then π(θ|y) = π(x, θ|y) π(x|θ, y) for any x!
Approximative Bayesian inference Background: The Laplace approximation
Consider the general problem
then π(θ|y) = π(x, θ|y) π(x|θ, y) for any x!
Further,

    π(θ | y) = π(x, θ | y) / π(x | θ, y)
             ∝ π(θ) π(x | θ) π(y | x) / π(x | θ, y)
             ≈ π(θ) π(x | θ) π(y | x) / π̃G(x | θ, y), evaluated at x = x*(θ),

where π̃G(x | θ, y) is the Gaussian approximation of π(x | θ, y) and x*(θ) is the mode.
Error: with n repeated measurements of the same x, the error is

    π̃(θ | y) = π(θ | y) (1 + O(n^{−3/2}))

after renormalisation. A relative error is a very nice property!
The Laplace approximation for π(θ|y)

The Laplace approximation for π(θ | y) is

    π(θ | y) = π(x, θ | y) / π(x | y, θ)   (any x)
             ≈ π(x, θ | y) / π̃G(x | y, θ), evaluated at x = x*(θ),
             = π̃(θ | y).     (1)
The Laplace approximation (1) turns out to be accurate: x | y, θ appears almost Gaussian in most cases, as x is Gaussian a priori and y typically adds only limited extra information about x. Note: π(θ | y) itself does not look Gaussian, so a Gaussian approximation of the joint (θ, x) would be inaccurate.
The Laplace approximation for π(xi|θ, y)

This task is more challenging, since the number of marginals to compute is O(n). An obvious, simple and fast alternative is to use the marginals of the GMRF approximation:

    π̃(xi | θ, y) = N(xi; µi(θ), σi²(θ)).
A more accurate option is again a Laplace approximation,

    π̃(xi | θ, y) ∝ π(x, θ | y) / π̃G(x−i | xi, θ, y), evaluated at x−i = x*−i(xi, θ),

where π̃G(x−i | xi, θ, y) is the 'Gaussian' approximation of x−i | xi, θ, y and x*−i(xi, θ) is its mode. This is expensive, since the mode must be recomputed for each i and each value of xi, but the cost issue can be solved.
A series expansion of the Laplace approximation for π(xi | θ, y) gives the mean and the skewness:

    log π̃(xi | θ, y) = −(1/2) xi² + b xi + (1/6) d xi³ + · · ·

Fit a skew-normal density, 2 φ(x) Φ(a x), so that its mean and skewness match b and d.
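For evaluating the fitted family 2φ(x)Φ(ax), a small sketch (my code; the standard normal cdf is written via erfc, and the location/scale fitting is omitted):

#include <math.h>
#include <stdio.h>

static const double PI = 3.14159265358979323846;
static double norm_pdf(double x) { return exp(-0.5 * x * x) / sqrt(2.0 * PI); }
static double norm_cdf(double x) { return 0.5 * erfc(-x / sqrt(2.0)); }

/* skew-normal density: a = 0 recovers the standard normal,
   larger |a| gives more skewness */
static double dsn(double x, double a) { return 2.0 * norm_pdf(x) * norm_cdf(a * x); }

int main(void)
{
    for (double x = -3.0; x <= 3.0; x += 1.0)
        printf("x = %4.1f   a=0: %.4f   a=2: %.4f\n", x, dsn(x, 0.0), dsn(x, 2.0));
    return 0;
}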
The integrated nested Laplace approximation (INLA): Summary

Step I. Explore π̃(θ | y): locate its mode, use the Hessian there to define standardised variables, and select a set of evaluation points {θk} (a grid, say) with associated weights.
Step II. For each θk, and for selected values of xi, compute the approximation of π(xi | θk, y) and fit a density of the form

    N(xi; µi, σi²) × exp(spline)

to represent the conditional marginal density.
Step III. Sum out the θk: for each i, compute

    π̃(xi | y) = ∑k π̃(xi | θk, y) π̃(θk | y) Δk,

and build the marginals π̃(θj | y) from the same weighted points. Again fit an

    N(xi; µi, σi²) × exp(spline)

density to represent π̃(xi | y).
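To make Steps I-III concrete, here is a self-contained toy version (my construction, not the authors' code) for the conjugate model θ ∼ Exp(1), x | θ ∼ N(0, 1/θ), yi | x ∼ N(x, 1): the likelihood is Gaussian, so the Gaussian approximation of x | θ, y is exact and the Laplace approximation of π(θ | y) carries no error.

#include <math.h>
#include <stdio.h>

#define M 10    /* number of observations      */
#define K 400   /* number of theta grid points */

int main(void)
{
    const double TWOPI = 6.28318530717958647692;
    double y[M], s = 0.0;
    for (int i = 0; i < M; i++) { y[i] = 1.0 + 0.1 * i; s += y[i]; }

    /* Step I: explore pi~(theta | y) on a grid */
    double th[K], lp[K], mu[K], q[K], w[K], dth = 0.02;
    for (int k = 0; k < K; k++) {
        th[k] = (k + 1) * dth;
        q[k]  = th[k] + M;        /* precision of x | theta, y   */
        mu[k] = s / q[k];         /* mean = mode of x | theta, y */
        double sq = 0.0;
        for (int i = 0; i < M; i++) sq += (y[i] - mu[k]) * (y[i] - mu[k]);
        lp[k] = -th[k]                                        /* log prior      */
              + 0.5 * log(th[k]) - 0.5 * th[k] * mu[k] * mu[k] /* log pi(x*|th) */
              - 0.5 * sq                                      /* log lik at x*  */
              - 0.5 * log(q[k]);                              /* - log piG(x*)  */
    }
    double lmax = lp[0], wsum = 0.0;
    for (int k = 1; k < K; k++) if (lp[k] > lmax) lmax = lp[k];
    for (int k = 0; k < K; k++) { w[k] = exp(lp[k] - lmax); wsum += w[k]; }
    for (int k = 0; k < K; k++) w[k] /= wsum;

    /* Steps II + III: here pi~(x | theta_k, y) = N(mu_k, 1/q_k);
       sum out theta_k with the normalised weights. */
    double xmean = 0.0;
    for (int k = 0; k < K; k++) xmean += w[k] * mu[k];
    printf("E(x | y) ~= %.4f\n", xmean);
    for (double x = 0.5; x <= 2.51; x += 0.5) {
        double dens = 0.0;
        for (int k = 0; k < K; k++)
            dens += w[k] * sqrt(q[k] / TWOPI)
                  * exp(-0.5 * q[k] * (x - mu[k]) * (x - mu[k]));
        printf("pi~(x = %.2f | y) ~= %.4f\n", x, dens);
    }
    return 0;
}

In a real model the Gaussian fit and the Laplace correction must be recomputed at every grid point, but the control flow is the same.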
Main idea [figures]

Practical approach (high accuracy) [figures]

Practical approach (lower accuracy) [figures]

[Figure: standard normal density, dnorm(x)/dnorm(0)]
Assessing the error

Tool 1: Compare a sequence of improved approximations (e.g. Gaussian, then simplified Laplace, then full Laplace).
Tool 2: Estimate the error using Monte Carlo:

    π(θ | y) / π̃(θ | y) ∝ E_π̃G[ exp{ r(x; θ, y) } ],

where r(·) is the sum of the log-likelihood terms minus their second-order Taylor expansions.
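A Monte Carlo sketch of this estimate for a one-node toy model (my choice of model and names): x ∼ N(0, 1), y | x ∼ Poisson(exp(x)); r(x) is the log-likelihood minus its second-order expansion at the mode.

#include <math.h>
#include <stdio.h>
#include <stdlib.h>

static double loglik(double x, double y) { return y * x - exp(x); }

int main(void)
{
    const double PI = 3.14159265358979323846;
    double y = 3.0, xs = 0.0;
    for (int it = 0; it < 50; it++)            /* Newton search for the mode */
        xs += (y - exp(xs) - xs) / (1.0 + exp(xs));
    double sd = 1.0 / sqrt(1.0 + exp(xs));     /* sd of the Gaussian approx  */

    double sum = 0.0;
    int nsim = 1000000;
    srand(1);
    for (int i = 0; i < nsim; i += 2) {        /* Box-Muller normal pairs */
        double u1 = (rand() + 1.0) / (RAND_MAX + 2.0);
        double u2 = (rand() + 1.0) / (RAND_MAX + 2.0);
        double z[2] = { sqrt(-2.0 * log(u1)) * cos(2.0 * PI * u2),
                        sqrt(-2.0 * log(u1)) * sin(2.0 * PI * u2) };
        for (int j = 0; j < 2; j++) {
            double x = xs + sd * z[j];
            double r = loglik(x, y) - (loglik(xs, y)
                     + (y - exp(xs)) * (x - xs)
                     - 0.5 * exp(xs) * (x - xs) * (x - xs));
            sum += exp(r);                     /* exp of the remainder term */
        }
    }
    printf("E[exp r] ~= %.5f (a value of 1 means no error)\n", sum / nsim);
    return 0;
}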
Tool 3: Estimate the "effective" number of parameters, as defined for the deviance information criterion,

    pD(θ) = Mean(D(x; θ)) − D(Mean(x); θ),

and compare it with the number of observations; a low ratio is good. This criterion has theoretical justification.
Examples: Stochastic volatility

[Figure] Log of the daily difference of the pound-dollar exchange rate from October 1st, 1981, to June 28th, 1985.
Simple model:

    xt | x1, . . . , xt−1, τ, φ ∼ N(φ xt−1, 1/τ),

where |φ| < 1 to ensure a stationary process. Observations are taken to be

    yt | x1, . . . , xt, µ ∼ N(0, exp(µ + xt)).
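A minimal simulation of this model (my code; the parameter values are made up):

#include <math.h>
#include <stdio.h>
#include <stdlib.h>

static double norm01(void)                       /* Box-Muller N(0,1) draw */
{
    double u1 = (rand() + 1.0) / (RAND_MAX + 2.0);
    double u2 = (rand() + 1.0) / (RAND_MAX + 2.0);
    return sqrt(-2.0 * log(u1)) * cos(2.0 * 3.14159265358979323846 * u2);
}

int main(void)
{
    double phi = 0.95, tau = 10.0, mu = -1.0, x = 0.0;
    srand(42);
    for (int t = 1; t <= 10; t++) {
        x = phi * x + norm01() / sqrt(tau);       /* latent log-volatility */
        double y = norm01() * sqrt(exp(mu + x));  /* observed log-return   */
        printf("t = %2d   x = % .3f   y = % .3f\n", t, x, y);
    }
    return 0;
}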
We use only the first 50 data points, which makes the problem much harder.
[Figure] Posterior marginal for ν, the persistence parameter φ transformed to the real line (ν = logit((1 + φ)/2)).
[Figure] Posterior marginal for log(κx).
[Figure] Predictions for µ + xt+k.
[Figure] Posterior marginal for ν.
Examples: Longitudinal mixed effect model

[Figure] Marginals for a0.

[Figure] Marginals for τb1.
Examples: Log-Gaussian Cox process

[Figure] Locations of trees of a particular type. The data come from a 50-hectare permanent tree plot, established in 1980 in the tropical moist forest of Barro Colorado Island in Gatun Lake, central Panama.
[Figure] Covariate: altitude.
[Figure] Covariate: norm of the gradient.
Model for the log-density at each "pixel" i of a 200 × 100 lattice:

    ηi = β0 + β1 c1i + β2 c2i + ui + vi,    with ∑i ui = 0.

The spatial term u is an IGMRF, with conditional expectation at interior points

    E(ui | u−i) = (1/20) ( 8 ∑(nearest neighbours) − 2 ∑(diagonal neighbours) − 1 ∑(second-order neighbours) ).
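A small sketch applying this conditional-mean stencil at an interior lattice point (my code; boundary corrections are ignored, and the 8/−2/−1 weights are as reconstructed above):

#include <stdio.h>

#define NX 8
#define NY 8

/* E(u_ij | rest) = (8*(4 nearest) - 2*(4 diagonal) - (4 two-away)) / 20 */
static double cond_mean(double u[NY][NX], int i, int j)
{
    double nn = u[i-1][j] + u[i+1][j] + u[i][j-1] + u[i][j+1];
    double dg = u[i-1][j-1] + u[i-1][j+1] + u[i+1][j-1] + u[i+1][j+1];
    double n2 = u[i-2][j] + u[i+2][j] + u[i][j-2] + u[i][j+2];
    return (8.0 * nn - 2.0 * dg - n2) / 20.0;
}

int main(void)
{
    double u[NY][NX];
    for (int i = 0; i < NY; i++)
        for (int j = 0; j < NX; j++)
            u[i][j] = 0.01 * (i * i + j);        /* an arbitrary smooth field */
    printf("E(u[4][4] | rest) = %.4f (field value %.4f)\n",
           cond_mean(u, 4, 4), u[4][4]);
    return 0;
}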
[Figure] The posterior expectation of the spatial field.
[Figure] Locations with high KLD (Kullback-Leibler divergence).
[Figure] Effect of altitude.
[Figure] Effect of the norm of the gradient.
Extensions

There are further extensions that will not be discussed here.
Extensions: Model choice

Choosing between and comparing various models is important, but difficult.
The marginal likelihood is the normalising constant for π̃(θ | y):

    π̃(y) = ∫ π(θ) π(x | θ) π(y | x, θ) / π̃G(x | θ, y), evaluated at x = x*(θ), dθ.    (2)

In many hierarchical GMRF models the prior is intrinsic/improper, so this is difficult to use.
Based on the deviance

    D(x; θ) = −2 ∑i log π(yi | xi, θ)

and

    DIC = 2 × Mean(D(x; θ)) − D(Mean(x); θ*).

This is quite easy to compute.
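A minimal sketch of the computation (my code; the deviance values are made-up numbers standing in for posterior draws):

#include <stdio.h>

int main(void)
{
    double D[5] = { 102.3, 98.7, 101.1, 99.5, 100.4 }; /* D(x; theta) draws  */
    double D_at_mean = 97.9;                           /* D(Mean(x); theta*) */
    double Dbar = 0.0;
    for (int s = 0; s < 5; s++) Dbar += D[s] / 5.0;
    printf("pD  = %.2f\n", Dbar - D_at_mean);          /* effective #params  */
    printf("DIC = %.2f\n", 2.0 * Dbar - D_at_mean);
    return 0;
}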
Cross-validated predictive densities are easy to compute using the INLA approach:

    π(yi | y−i) = ∫ π(yi | xi, θ) π(xi | y−i, θ) dxi,

where

    π(xi | y−i, θ) ∝ π(xi | y, θ) / π(yi | xi, θ).

This requires a one-dimensional integral for each i and θ.
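A sketch of this one-dimensional integral (my toy setup: π(xi | y, θ) = N(µ, s2) with a Poisson likelihood). Substituting the expression for π(xi | y−i, θ) into the integral shows that π(yi | y−i, θ) = 1 / ∫ π(xi | y, θ) / π(yi | xi, θ) dxi, so a single quadrature suffices:

#include <math.h>
#include <stdio.h>

int main(void)
{
    const double PI = 3.14159265358979323846;
    double mu = 1.0, s2 = 0.04, yi = 3.0;    /* made-up marginal and datum */
    double C = 0.0, h = 1e-3, sd = sqrt(s2);
    for (double x = mu - 10.0 * sd; x <= mu + 10.0 * sd; x += h) {
        double dens = exp(-0.5 * (x - mu) * (x - mu) / s2) / sqrt(2.0 * PI * s2);
        double llik = yi * x - exp(x) - lgamma(yi + 1.0); /* log Poisson(yi; e^x) */
        C += dens * exp(-llik) * h;          /* divide out the likelihood */
    }
    printf("pi(yi | y_-i, theta) ~= %.5f\n", 1.0 / C);
    return 0;
}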
Extensions: Automatic detection of "surprising" observations

Compute Prob(yi_new ≤ yi | y−i) and look for unusually large or small values.
Summary and discussion

The approach covers a wide range of applications! And it works extremely well, way beyond my expectations!!!
Bonus: High(er) number of hyperparameters

Numerical (grid) integration is costly: it requires at least 3^dim(θ) integration points. We need another approach for "high-dimensional" hyperparameters.
From www.wikipedia.org: "In statistics, a central composite design is an experimental design, useful in response surface methodology, for building a second order (quadratic) model for the response variable without needing to use a complete three-level factorial experiment."
Dimension   #Int.pts CCD   #Int.pts GRID (dim³)
 2               9                8
 3              15               27
 4              25               64
 5              27              125
 6              45              216
 7              79              343
 8              81              512
 9             147              729
10             149             1000
14             285             2744
18             549             5832
22            1069            10648
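For reference, the pattern in the table can be reproduced (my code): a full-factorial CCD needs 2^d corner points + 2d axial points + 1 centre point; the smaller CCD counts for dim ≥ 5 in the table come from fractional-factorial designs, and the GRID column grows like dim³.

#include <math.h>
#include <stdio.h>

int main(void)
{
    printf("dim   CCD (full factorial)   GRID (dim^3)\n");
    for (int d = 2; d <= 10; d++)
        printf("%3d   %20.0f   %12d\n",
               d, pow(2, d) + 2.0 * d + 1.0, d * d * d);
    return 0;
}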
Bonus: Parallel computing using OpenMP

Why parallel computing? And why are so few doing it?
The gain/pain ratio is simply too low! But there is hope, due to multi-core processors and tools like OpenMP.
Once upon a time, chip makers made computer chips faster every year by increasing their processing speeds. But lately, the microprocessor industry has run into some fundamental limits to those speeds. The latest solution: design chips with multiple processor cores. The result: today's big-brained chips can do more processing than ever before, if the software is modified to take advantage of their design.
May 13, 2007: the GCC 4.2 release series adds OpenMP support to the C, C++ and Fortran compilers.
With OpenMP, you parallelise the code by adding compiler directives; the compiler and run-time system take care of the rest.
Example from GMRFLib: the per-node second-order expansions of the log-likelihood are independent, so the loop parallelises directly.

/* expand each log-likelihood term to second order, in parallel */
#pragma omp parallel for private(i)
for (i = 0; i < n; i++) {
    GMRFLib_2order_approx(NULL, &bb[i], &cc[i], d[i], mode[i], i, mode,
                          loglFunc, loglFunc_arg,
                          &(blockupdate_par->step_len));
    cc[i] = MAX(0.0, cc[i]);   /* keep the curvature non-negative */
}
Bonus: Spatial GLMs

Model: a spatial GLM driven by a latent Gaussian field. Solve using a GMRF representation of the latent field, possibly under linear constraints.
[Figure] Rongelap Island with 157 measurement locations (axes: East (m), North (m)).
[Figure] Marginal predictions.
[Figure] Marginal predicted standard deviations.
Gaussian fields [figures]