

SLIDE 1

Graphical Models

Monte-Carlo Inference

Siamak Ravanbakhsh, Winter 2018

SLIDE 2

Learning objectives

- the relationship between sampling and inference
- sampling from univariate distributions
- Monte Carlo sampling in graphical models

SLIDE 3

Monte Carlo inference

calculating marginals:

$p(x_1 = \bar{x}_1) = \sum_{x_2, \ldots, x_n} p(\bar{x}_1, x_2, \ldots, x_n)$

SLIDE 4

Monte Carlo inference

calculating marginals:

$p(x_1 = \bar{x}_1) = \sum_{x_2, \ldots, x_n} p(\bar{x}_1, x_2, \ldots, x_n)$

approximate it by sampling $X^{(l)} \sim p(x)$:

$p(x_1 = \bar{x}_1) \approx \frac{1}{L} \sum_l \mathbb{I}(X_1^{(l)} = \bar{x}_1)$

SLIDE 5

Monte Carlo inference

calculating marginals:

$p(x_1 = \bar{x}_1) = \sum_{x_2, \ldots, x_n} p(\bar{x}_1, x_2, \ldots, x_n)$

approximate it by sampling $X^{(l)} \sim p(x)$:

$p(x_1 = \bar{x}_1) \approx \frac{1}{L} \sum_l \mathbb{I}(X_1^{(l)} = \bar{x}_1)$

inference in the exponential family $p_\theta(x) = \exp(\langle \theta, \psi(x) \rangle - A(\theta))$ is about finding the mean parameters $\mu = \mathbb{E}_{p_\theta}[\psi(x)]$; using L samples (particles):

$\mu \approx \frac{1}{L} \sum_l \psi(X^{(l)})$
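To make the estimator concrete, here is a minimal Python sketch (illustrative, not from the slides): it draws samples from a small made-up joint and estimates a marginal as the empirical average of indicators, which is exactly the mean-parameter estimate when $\psi$ is an indicator feature.

```python
import numpy as np

rng = np.random.default_rng(0)

# A small known joint p(x1, x2) over two binary variables
# (in practice p(x) is a graphical model we can sample from).
joint = np.array([[0.3, 0.1],
                  [0.2, 0.4]])

# Draw L samples (x1, x2) ~ p(x) by sampling the flattened joint.
L = 10_000
flat = rng.choice(4, size=L, p=joint.ravel())
x1 = flat // 2                       # recover x1 from the flat index

# Monte Carlo marginal: the average of indicators I(X1 = 1).
print("estimate:", np.mean(x1 == 1), "exact:", joint[1].sum())
```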

SLIDE 6

Sampling from categorical dist.

given access to a pseudo-random number generator for $X \sim U(0, 1)$, and a given categorical distribution $p(X = d) = p_d \;\; \forall \, 1 \leq d \leq D$:

[figure: the unit interval $[0, 1]$ partitioned into segments of lengths $p_1, p_2, \ldots, p_D$]

generate $X \sim U(0, 1)$ and see which segment it falls into; use binary search over the cumulative probabilities: $O(\log D)$
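A minimal sketch of this procedure (illustrative; the distribution $[0.2, 0.5, 0.3]$ is made up): precompute the cumulative sums once, then locate each uniform draw by binary search with Python's `bisect`.

```python
import bisect
import itertools
import random

def sample_categorical(probs, n):
    """Draw n samples from p(X = d) = probs[d] via inverse CDF + binary search."""
    cdf = list(itertools.accumulate(probs))   # segment boundaries of [0, 1]
    # Each uniform draw falls into one segment; find it in O(log D).
    return [bisect.bisect_left(cdf, random.random()) for _ in range(n)]

random.seed(0)
samples = sample_categorical([0.2, 0.5, 0.3], n=10_000)
print([samples.count(d) / len(samples) for d in range(3)])  # ~ [0.2, 0.5, 0.3]
```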

SLIDE 7

Transforming probability densities

given a random variable $X \sim p_X$, what is the prob. density of $Y = \phi(X)$?

$p_Y(y) = p_X(\phi^{-1}(y)) \left| \frac{d\phi^{-1}(y)}{dy} \right|$

the first factor evaluates $p_X$ at the corresponding x; the derivative term accounts for how $\phi$ changes the volume around each point y
in the multivariate case: determinant of the Jacobian matrix (bonus)

image: wikipedia
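As a quick numeric check of this formula (illustrative; the choice $\phi(x) = x^2$ with $X \sim U(0, 1)$ is made up, giving $p_Y(y) = \frac{1}{2\sqrt{y}}$): compare a histogram of transformed samples against the predicted density.

```python
import numpy as np

rng = np.random.default_rng(0)

# X ~ U(0, 1) and Y = phi(X) = X**2, so phi^{-1}(y) = sqrt(y) and
# p_Y(y) = p_X(sqrt(y)) * |d sqrt(y) / dy| = 1 / (2 * sqrt(y)) on (0, 1).
y = rng.uniform(size=200_000) ** 2

hist, edges = np.histogram(y, bins=np.linspace(0, 1, 21), density=True)
mids = (edges[:-1] + edges[1:]) / 2
# Compare histogram vs predicted density (away from the y -> 0 spike).
print(np.round(hist[2:5], 2), np.round(1 / (2 * np.sqrt(mids[2:5])), 2))
```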

SLIDE 8

Inverse transform sampling

let $X \sim p_X = U(0, 1)$ be uniform; given a density $p_Y$

images: work.thaslwanter.at, Murphy's book

SLIDE 9

Inverse transform sampling

let $X \sim p_X = U(0, 1)$ be uniform; given a density $p_Y$, let $F_Y(y) = P(Y < y)$ be its CDF

images: work.thaslwanter.at, Murphy's book

SLIDE 10

Inverse transform sampling

let $X \sim p_X = U(0, 1)$ be uniform; given a density $p_Y$, let $F_Y(y) = P(Y < y)$ be its CDF; transform X using $\phi(X) = F_Y^{-1}(X)$; what is the density of $Y = \phi(X)$?

images: work.thaslwanter.at, Murphy's book

SLIDE 11

Inverse transform sampling

let $X \sim p_X = U(0, 1)$ be uniform; given a density $p_Y$, let $F_Y(y) = P(Y < y)$ be its CDF; transform X using $\phi(X) = F_Y^{-1}(X)$; what is the density of $Y = \phi(X)$?

$Y \sim p_X(\phi^{-1}(y)) \left| \frac{d\phi^{-1}(y)}{dy} \right| = p_X(F_Y(y)) \left| \frac{dF_Y(y)}{dy} \right| = p_Y(y)$

since $p_X = U(0, 1)$ is constant, $p_X(F_Y(y)) = 1$, and the derivative of the CDF is the density $p_Y(y)$

images: work.thaslwanter.at, Murphy's book

SLIDE 12

Inverse transform sampling: example

Exponential distribution:

$p(y) = \lambda e^{-\lambda y} \qquad F_Y(y) = 1 - e^{-\lambda y}$

calculate the inverse CDF:

$F_Y^{-1}(x) = -\frac{1}{\lambda} \ln(1 - x)$

[figures: the density p(y) and the CDF $F_Y$]

image: wikipedia
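A minimal sketch of this example (illustrative): push uniform draws through the inverse CDF above and check the sample mean against the known mean $1/\lambda$.

```python
import numpy as np

rng = np.random.default_rng(0)
lam = 2.0

# Inverse transform sampling: Y = F^{-1}(X) = -ln(1 - X)/lambda, X ~ U(0, 1).
x = rng.uniform(size=100_000)
y = -np.log(1.0 - x) / lam

print("sample mean:", y.mean(), "expected 1/lambda:", 1 / lam)
```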

SLIDE 13

Sampling in graphical models

ancestral sampling for Bayes-nets

SLIDE 14

Sampling in graphical models

ancestral sampling for Bayes-nets:

- find a topological ordering, e.g., D,I,G,S,L or I,S,D,G,L
- sample by conditioning on parents: $G \sim P(g \mid I, D)$ (how?)
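A minimal sketch of ancestral sampling on a fragment of this network (the CPT values below are made up for illustration; D and I are roots, G is sampled given its parents):

```python
import random

random.seed(0)

def bernoulli(p):
    """Sample a binary variable with P(X = 1) = p."""
    return int(random.random() < p)

# Made-up CPTs for the fragment D, I -> G of the network.
p_d1, p_i1 = 0.4, 0.3
p_g1 = {(0, 0): 0.3, (0, 1): 0.05, (1, 0): 0.9, (1, 1): 0.5}  # P(G=1 | I, D)

def ancestral_sample():
    # Follow a topological ordering: parents are sampled before children.
    d = bernoulli(p_d1)
    i = bernoulli(p_i1)
    g = bernoulli(p_g1[(i, d)])      # G ~ P(g | I, D), a categorical draw
    return d, i, g

samples = [ancestral_sample() for _ in range(10_000)]
print(sum(g for _, _, g in samples) / len(samples))  # Monte Carlo P(G = 1)
```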

SLIDE 15

Introducing evidence

what if we have evidence? E.g., how to sample from the posterior $p(D, I, S, L \mid G = g^1)$?

SLIDE 16

Introducing evidence

rejection sampling: what if we have evidence? E.g., how to sample from the posterior $p(D, I, S, L \mid G = g^1)$?

- find a topological ordering
- sample by conditioning on parents
- only keep samples compatible with the evidence

wasteful if the evidence $(G = g^1)$ has a low probability

SLIDE 17

Rejection sampling

general form: to sample from $p(x) = \frac{1}{Z} \tilde{p}(x)$:

- use a proposal distribution $q(x)$ such that $M q(x) > \tilde{p}(x)$ everywhere
- sample $X \sim q(x)$
- accept the sample with probability $\frac{\tilde{p}(X)}{M q(X)}$

image: Murphy's book

SLIDE 18

Rejection sampling

general form: to sample from $p(x) = \frac{1}{Z} \tilde{p}(x)$:

- use a proposal distribution $q(x)$ such that $M q(x) > \tilde{p}(x)$ everywhere
- sample $X \sim q(x)$
- accept the sample with probability $\frac{\tilde{p}(X)}{M q(X)}$

what is the probability of acceptance?

$\int_x q(x) \frac{\tilde{p}(x)}{M q(x)} \, dx = \frac{Z}{M}$

for high-dimensional dists. $\frac{Z}{M}$ becomes small! rejection sampling becomes wasteful

image: Murphy's book
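A minimal sketch of the general scheme (illustrative: a Gaussian-shaped unnormalized target $\tilde{p}$ on $[-3, 3]$, a uniform proposal $q$, and an envelope constant $M$ chosen so $M q(x) \geq \tilde{p}(x)$ everywhere):

```python
import numpy as np

rng = np.random.default_rng(0)

def p_tilde(x):
    """Unnormalized target (Gaussian shape; Z is treated as unknown)."""
    return np.exp(-0.5 * x**2)

q_pdf = 1.0 / 6.0            # uniform proposal q(x) on [-3, 3]
M = 6.0                      # then M * q(x) = 1 >= p_tilde(x) everywhere

L = 100_000
x = rng.uniform(-3.0, 3.0, size=L)                 # X ~ q
u = rng.uniform(size=L)
samples = x[u < p_tilde(x) / (M * q_pdf)]          # accept w.p. p_tilde/(M q)

# Acceptance probability is Z / M: small whenever the envelope is loose.
print("acceptance rate:", len(samples) / L, "Z/M ~", np.sqrt(2 * np.pi) / M)
```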

SLIDE 19

Likelihood weighting

what if we have evidence? E.g., how to sample from the posterior $p(D, I, S, L \mid G = g^1)$?

- find a topological ordering
- assign a weight $w^{(l)} \leftarrow 1$ to each particle
- sample by conditioning on parents
- when sampling an observed variable, set it to its observed value ($G = g^1$) and update the sample's weight using the current assignments to its parents:

$w^{(l)} \leftarrow w^{(l)} \times p(G = g^1 \mid D = d^{(l)}, I = i^{(l)})$
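A minimal sketch of likelihood weighting on the same made-up fragment used in the ancestral-sampling sketch above (CPT values illustrative), with the evidence clamped on G; the last line previews the weighted-particle estimate of the next slide.

```python
import random

random.seed(0)

# Same made-up CPTs as in the ancestral-sampling sketch above.
p_d1, p_i1 = 0.4, 0.3
p_g1 = {(0, 0): 0.3, (0, 1): 0.05, (1, 0): 0.9, (1, 1): 0.5}

def weighted_particle(g_obs):
    """One particle for p(D, I | G = g_obs) by likelihood weighting."""
    w = 1.0
    d = int(random.random() < p_d1)    # unobserved: sample from its CPD
    i = int(random.random() < p_i1)
    # Observed: clamp G to the evidence and multiply the weight by the
    # likelihood of the evidence given the current parent assignment.
    p1 = p_g1[(i, d)]
    w *= p1 if g_obs == 1 else 1.0 - p1
    return (d, i, g_obs), w

particles = [weighted_particle(g_obs=1) for _ in range(10_000)]
total = sum(w for _, w in particles)
print(sum(w for (d, i, g), w in particles if i == 1) / total)  # p(I=1 | G=1)
```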

SLIDE 20

Likelihood weighting

what if we have evidence? E.g., how to sample from the posterior $p(D, I, S, L \mid G = g^1)$?

using weighted particles for inference:

$p(S = s \mid G = g^1) \approx \frac{\sum_l w^{(l)} \mathbb{I}(S^{(l)} = s)}{\sum_l w^{(l)}}$

SLIDE 21

Likelihood weighting

what if we have evidence? E.g., how to sample from the posterior $p(D, I, S, L \mid G = g^1)$?

using weighted particles for inference:

$p(S = s \mid G = g^1) \approx \frac{\sum_l w^{(l)} \mathbb{I}(S^{(l)} = s)}{\sum_l w^{(l)}}$

this is a special case of importance sampling

SLIDE 22

Unnormalized importance sampling

Objective: Monte Carlo estimate of $\mathbb{E}_p[f(x)]$ when p is difficult to sample from (yet easy to evaluate); use a proposal distribution q with $p(x) > 0 \Rightarrow q(x) > 0$

[figure: target p(x), proposal q(x), and function f(x)]

image: Bishop's book

SLIDE 23

Unnormalized importance sampling

Objective: Monte Carlo estimate of $\mathbb{E}_p[f(x)]$ when p is difficult to sample from (yet easy to evaluate); use a proposal distribution q, where since $p(x) > 0 \Rightarrow q(x) > 0$:

$\mathbb{E}_p[f(x)] = \int_x p(x) f(x) \, dx = \int_x q(x) \frac{p(x)}{q(x)} f(x) \, dx = \mathbb{E}_q\!\left[\frac{p(x)}{q(x)} f(x)\right]$

[figure: target p(x), proposal q(x), and function f(x)]

image: Bishop's book

SLIDE 24

Unnormalized importance sampling

Objective: Monte Carlo estimate of $\mathbb{E}_p[f(x)]$ when p is difficult to sample from (yet easy to evaluate); use a proposal distribution q, where since $p(x) > 0 \Rightarrow q(x) > 0$:

$\mathbb{E}_p[f(x)] = \int_x p(x) f(x) \, dx = \int_x q(x) \frac{p(x)}{q(x)} f(x) \, dx = \mathbb{E}_q\!\left[\frac{p(x)}{q(x)} f(x)\right]$

- sample $X^{(l)} \sim q(x)$
- assign an importance sampling weight $w(X^{(l)}) = \frac{p(X^{(l)})}{q(X^{(l)})}$

[figure: target p(x), proposal q(x), and function f(x)]

image: Bishop's book

SLIDE 25

Unnormalized importance sampling

Objective: Monte Carlo estimate of $\mathbb{E}_p[f(x)]$ when p is difficult to sample from (yet easy to evaluate); use a proposal distribution q, where since $p(x) > 0 \Rightarrow q(x) > 0$:

$\mathbb{E}_p[f(x)] = \int_x p(x) f(x) \, dx = \int_x q(x) \frac{p(x)}{q(x)} f(x) \, dx = \mathbb{E}_q\!\left[\frac{p(x)}{q(x)} f(x)\right]$

- sample $X^{(l)} \sim q(x)$
- assign an importance sampling weight $w(X^{(l)}) = \frac{p(X^{(l)})}{q(X^{(l)})}$

$\mathbb{E}_p[f(x)] \approx \frac{1}{L} \sum_l w(X^{(l)}) f(X^{(l)})$

this is an unbiased estimator, and can be more efficient than sampling from p itself! (why?)

[figure: target p(x), proposal q(x), and function f(x)]

image: Bishop's book
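A minimal sketch (illustrative: target $p = N(0,1)$, proposal $q = N(0,4)$, estimating $\mathbb{E}_p[x^2] = 1$); both densities are evaluated exactly, so the weights are $p(x)/q(x)$:

```python
import numpy as np

rng = np.random.default_rng(0)

def gauss_pdf(x, sigma):
    """Density of N(0, sigma^2)."""
    return np.exp(-0.5 * (x / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

L = 100_000
x = rng.normal(0.0, 2.0, size=L)                 # X ~ q = N(0, 4)
w = gauss_pdf(x, 1.0) / gauss_pdf(x, 2.0)        # weights w = p(x) / q(x)

# Unbiased estimate of E_p[x^2] = 1 under the target p = N(0, 1).
print(np.mean(w * x**2))
```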

SLIDE 26

Normalized importance sampling

What if we can evaluate p only up to a constant? $p(x) = \frac{1}{Z} \tilde{p}(x)$

Examples:

- posterior in directed models: $p(x \mid E = e) = \frac{1}{p(e)} p(x, e)$
- prior in undirected models: $p(x) = \frac{1}{Z} \prod_I \phi_I(x_I)$

SLIDE 27

Normalized importance sampling

What if we can evaluate p only up to a constant? $p(x) = \frac{1}{Z} \tilde{p}(x)$

Examples:

- posterior in directed models: $p(x \mid E = e) = \frac{1}{p(e)} p(x, e)$
- prior in undirected models: $p(x) = \frac{1}{Z} \prod_I \phi_I(x_I)$

define $w(x) = \frac{\tilde{p}(x)}{q(x)}$; then

$\mathbb{E}_p[f(x)] = \int_x p(x) f(x) \, dx = \frac{1}{Z} \int_x q(x) \frac{\tilde{p}(x)}{q(x)} f(x) \, dx = \frac{1}{Z} \mathbb{E}_q[w(x) f(x)] = \frac{\mathbb{E}_q[w(x) f(x)]}{\mathbb{E}_q[w(x)]}$

since $\mathbb{E}_q[w(x)] = \int_x \tilde{p}(x) \, dx = Z$

SLIDE 28

Normalized importance sampling

What if we can evaluate p only up to a constant? $p(x) = \frac{1}{Z} \tilde{p}(x)$

Examples:

- posterior in directed models: $p(x \mid E = e) = \frac{1}{p(e)} p(x, e)$
- prior in undirected models: $p(x) = \frac{1}{Z} \prod_I \phi_I(x_I)$

define $w(x) = \frac{\tilde{p}(x)}{q(x)}$; then

$\mathbb{E}_p[f(x)] = \int_x p(x) f(x) \, dx = \frac{1}{Z} \int_x q(x) \frac{\tilde{p}(x)}{q(x)} f(x) \, dx = \frac{1}{Z} \mathbb{E}_q[w(x) f(x)] = \frac{\mathbb{E}_q[w(x) f(x)]}{\mathbb{E}_q[w(x)]}$

since $\mathbb{E}_q[w(x)] = \int_x \tilde{p}(x) \, dx = Z$

- sample $X^{(l)} \sim q(x)$
- assign an importance sampling weight $w(X^{(l)}) = \frac{\tilde{p}(X^{(l)})}{q(X^{(l)})}$

$\mathbb{E}_p[f(x)] \approx \frac{\sum_l w(X^{(l)}) f(X^{(l)})}{\sum_l w(X^{(l)})}$

this is a biased estimator (e.g., consider L=1)
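A minimal sketch (illustrative): the same Gaussian setup as above, but now the target is only evaluated up to its normalizing constant, so the estimate divides by the sum of the weights:

```python
import numpy as np

rng = np.random.default_rng(0)

def p_tilde(x):
    """Unnormalized N(0, 1) target: the normalizing constant is 'unknown'."""
    return np.exp(-0.5 * x**2)

def q_pdf(x, sigma=2.0):
    return np.exp(-0.5 * (x / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

L = 100_000
x = rng.normal(0.0, 2.0, size=L)     # X ~ q = N(0, 4)
w = p_tilde(x) / q_pdf(x)            # w = p_tilde(x) / q(x)

# Self-normalized estimate of E_p[x^2] = 1: biased for finite L, consistent.
print(np.sum(w * x**2) / np.sum(w))
```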

SLIDE 29

Revisiting likelihood weighting

likelihood weighting:

$p(S = s \mid G = g^2, I = i^1) \approx \frac{\sum_l w^{(l)} \mathbb{I}(S^{(l)} = s)}{\sum_l w^{(l)}}$

equivalent to:

SLIDE 30

Revisiting likelihood weighting

likelihood weighting:

$p(S = s \mid G = g^2, I = i^1) \approx \frac{\sum_l w^{(l)} \mathbb{I}(S^{(l)} = s)}{\sum_l w^{(l)}}$

equivalent to: importance sampling using the mutilated Bayes-net as the proposal q

SLIDE 31

Revisiting likelihood weighting

likelihood weighting:

$p(S = s \mid G = g^2, I = i^1) \approx \frac{\sum_l w^{(l)} \mathbb{I}(S^{(l)} = s)}{\sum_l w^{(l)}}$

equivalent to: importance sampling using the mutilated Bayes-net as the proposal q

$w^{(l)} = \frac{\tilde{p}(X)}{q(X)} = p(G = g^2 \mid I = i^{(l)}, D = d^{(l)}) \times P(I = i^1)$

similar to the initial algorithm for likelihood weighting

SLIDE 32

Revisiting likelihood weighting

likelihood weighting:

$p(S = s \mid G = g^2, I = i^1) \approx \frac{\sum_l w^{(l)} \mathbb{I}(S^{(l)} = s)}{\sum_l w^{(l)}}$

equivalent to: importance sampling using the mutilated Bayes-net as the proposal q

$w^{(l)} = \frac{\tilde{p}(X)}{q(X)} = p(G = g^2 \mid I = i^{(l)}, D = d^{(l)}) \times P(I = i^1)$

similar to the initial algorithm for likelihood weighting

evidence only affects sampling for the descendants; what if all evidence appears at leaf nodes?

SLIDE 33

Summary

Monte Carlo sampling for approximate inference:

- sampling from univariates: categorical distribution, inverse transform sampling
- marginals in directed models: ancestral sampling
- more sophisticated (incorporating evidence): rejection sampling, importance sampling (likelihood weighting)