

SLIDE 1

CS 730/730W/830: Intro AI

Bayesian Networks
Approx. Inference
Exact Inference

Wheeler Ruml (UNH) Lecture 27, CS 730 – 1 / 15

handout: slides; final blog entries were due

SLIDE 2

Bayesian Networks

Bayesian Networks
  ■ Example
  ■ Reminder
Approx. Inference
Exact Inference


SLIDE 3

The Alarm Domain


SLIDE 4

Bayes Nets Reminder


In general: P(x1, …, xn) = P(xn | xn−1, …, x1) P(xn−1, …, x1)

SLIDE 5

Bayes Nets Reminder


In general:

P(x1, …, xn) = P(xn | xn−1, …, x1) P(xn−1, …, x1) = ∏_{i=1}^{n} P(xi | xi−1, …, x1)

A Bayes net specifies independence: P(Xi | Xi−1, …, X1) = P(Xi | parents(Xi))

so the joint distribution factors: P(x1, …, xn) = ∏_{i=1}^{n} P(xi | parents(Xi))

What is the distribution of X given evidence e and unobserved Y?
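The factorization can be made concrete with a small sketch. The two-variable network B → A and its numbers below are hypothetical, chosen only to illustrate the product of CPT entries; they are not from the lecture.

```python
# Joint probability as a product of CPT entries:
# P(x1, ..., xn) = prod_i P(xi | parents(Xi)).
# Minimal sketch on a hypothetical two-node net B -> A.

p_b = 0.1                             # P(B = true), illustrative value
p_a_given = {True: 0.9, False: 0.2}   # P(A = true | B), illustrative values

def joint(b, a):
    """P(B=b, A=a) = P(b) * P(a | b), per the Bayes-net factorization."""
    pb = p_b if b else 1 - p_b
    pa = p_a_given[b] if a else 1 - p_a_given[b]
    return pb * pa

# Sanity check: the four joint entries sum to 1.
total = sum(joint(b, a) for b in (True, False) for a in (True, False))
```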

SLIDE 6

Approximate Inference

Bayesian Networks
Approx. Inference
  ■ Basic Sampling
  ■ Rej. Sampling
  ■ Likelihood Weighting
  ■ Break
Exact Inference


SLIDE 7

Sampling According to the Joint Distribution


• sample values for variables, working top down
• directly implements the semantics of the network ("generative model")
• each sample is linear time
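Top-down sampling can be sketched as follows; the two-node net B → A and its probabilities are hypothetical stand-ins for a full network.

```python
import random

# Top-down ("prior") sampling from a Bayes net: sample each variable given
# its already-sampled parents, in topological order. Sketch on a
# hypothetical two-node net B -> A with illustrative CPT values.

def sample_once(rng):
    b = rng.random() < 0.1              # P(B) = 0.1
    p_a = 0.9 if b else 0.2             # P(A | B)
    a = rng.random() < p_a
    return b, a                         # one complete sampled "world"

rng = random.Random(0)
samples = [sample_once(rng) for _ in range(10000)]
frac_b = sum(b for b, _ in samples) / len(samples)  # should approach P(B) = 0.1
```

Each call touches every variable exactly once, which is why each sample is linear time in the size of the network.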

SLIDE 8

Rejection Sampling


What is the distribution of X given evidence e and unobserved Y?

• draw worlds from the joint, rejecting those that do not match e
• look at the distribution of X among the surviving samples
• each sample is linear time, but overall slow if e is unlikely
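A sketch of rejection sampling on the same hypothetical two-node net B → A (illustrative numbers, not from the lecture), with evidence A = true:

```python
import random

# Rejection sampling: sample full worlds from the joint, discard those
# inconsistent with the evidence A = true, then read P(B | a) off the
# surviving samples. Hypothetical net B -> A with illustrative CPTs.

def sample_world(rng):
    b = rng.random() < 0.1                     # P(B) = 0.1
    a = rng.random() < (0.9 if b else 0.2)     # P(A | B)
    return b, a

rng = random.Random(1)
kept = [b for b, a in (sample_world(rng) for _ in range(50000)) if a]
p_b_given_a = sum(kept) / len(kept)
# exact answer here: 0.1*0.9 / (0.1*0.9 + 0.9*0.2) = 1/3
```

Note that only about 27% of samples survive rejection with these numbers; if the evidence were rarer, almost all of the (linear-time) sampling work would be thrown away.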

SLIDE 9

Likelihood Weighting


What is the distribution of X given evidence e and unobserved Y?

ChooseSample(e):
  w ← 1
  for each variable Vi in topological order:
    if (Vi = vi) ∈ e then
      w ← w · P(vi | parents(Vi))
    else
      vi ← sample from P(Vi | parents(Vi))
(afterwards, normalize the samples so all w's sum to 1)

• uses all samples, but needs lots of samples if evidence variables are late in the ordering
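The ChooseSample pseudocode above can be sketched in code on the same hypothetical net B → A with evidence A = true: evidence variables are fixed rather than sampled, and each sample carries a weight.

```python
import random

# Likelihood weighting on a hypothetical net B -> A, evidence A = true.
# Unobserved variables are sampled from their CPTs; observed variables
# contribute a factor P(evidence value | parents) to the sample's weight.

def weighted_sample(rng, evidence_a=True):
    w = 1.0
    b = rng.random() < 0.1                 # B unobserved: sample from P(B)
    p_a = 0.9 if b else 0.2                # A observed: weight, don't sample
    w *= p_a if evidence_a else 1 - p_a
    return b, w

rng = random.Random(2)
num = den = 0.0
for _ in range(50000):
    b, w = weighted_sample(rng)
    num += w * b
    den += w
p_b_given_a = num / den    # normalized weighted estimate; exact answer is 1/3
```

Every sample contributes, unlike rejection sampling, but when the evidence variables come late in the topological order, most of the sampling happens before the weights can steer anything, so many samples end up with tiny weight.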

SLIDE 10

Break


exam 3: calculator, review session May 4

projects

SLIDE 11

Exact Inference in Bayesian Networks

Bayesian Networks
Approx. Inference
Exact Inference
  ■ Enumeration
  ■ Example
  ■ Var. Elim. 1
  ■ Var. Elim. 2
  ■ EOLQs


SLIDE 12

Enumeration Over the Joint Distribution

Bayesian Networks

  • Approx. Inference

Exact Inference ■ Enumeration ■ Example ■ Var. Elim. 1 ■ Var. Elim. 2 ■ EOLQs

Wheeler Ruml (UNH) Lecture 27, CS 730 – 11 / 15

What is the distribution of X given evidence e and unobserved Y?

P(X | e) = P(e | X) P(X) / P(e)
         = α P(X, e)
         = α Σ_y P(X, e, y)
         = α Σ_y ∏_{i=1}^{n} P(vi | parents(Vi))

SLIDE 13

Example


P(B | j, m) = P(j, m | B) P(B) / P(j, m)
            = α P(B, j, m)
            = α Σ_e Σ_a P(B, e, a, j, m)
            = α Σ_e Σ_a ∏_{i=1}^{n} P(vi | parents(Vi))

P(b | j, m) = α Σ_e Σ_a P(b) P(e) P(a | b, e) P(j | a) P(m | a)
            = α P(b) Σ_e P(e) Σ_a P(a | b, e) P(j | a) P(m | a)

[draw tree]
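The enumeration can be carried out numerically. The slide does not give the alarm network's CPT entries, so the numbers below are the standard textbook values (Russell & Norvig) and should be treated as an assumption:

```python
from itertools import product

# Enumeration over the joint for the alarm network:
# P(b | j, m) = alpha * sum_e sum_a P(b) P(e) P(a|b,e) P(j|a) P(m|a).
# CPT values are the standard textbook numbers, assumed here.

P_B, P_E = 0.001, 0.002
P_A = {(True, True): 0.95, (True, False): 0.94,
       (False, True): 0.29, (False, False): 0.001}   # P(a | B, E)
P_J = {True: 0.90, False: 0.05}                       # P(j | A)
P_M = {True: 0.70, False: 0.01}                       # P(m | A)

def pr(p_true, v):
    """P(var = v) given P(var = true)."""
    return p_true if v else 1 - p_true

def unnormalized(b):
    """P(b, j, m) up to alpha: sum over the hidden variables E and A."""
    return sum(pr(P_B, b) * pr(P_E, e) * pr(P_A[(b, e)], a) * P_J[a] * P_M[a]
               for e, a in product((True, False), repeat=2))

t, f = unnormalized(True), unnormalized(False)
p_b_given_jm = t / (t + f)    # approx 0.284 with these CPTs
```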

SLIDE 14

Variable Elimination


P(B | j, m) = α P(B) Σ_e P(e) Σ_a P(a | B, e) P(j | a) P(m | a)

• factors = tables; write f_origin(dimensions), e.g. f_A(A, B, E), f_M(A)
• multiplying factors gives a table over the union of their variables
• summing out a variable reduces the table
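The two factor operations can be sketched directly, representing a factor as a list of (boolean) variable names plus a table keyed by their values; the representation and names are illustrative, not from the lecture.

```python
from itertools import product

# Factor operations for variable elimination. A factor is (vars, table),
# where table maps a tuple of truth values (in vars order) to a number.
# Variables are assumed boolean for simplicity.

def multiply(f, g):
    """Pointwise product over the union of the two factors' variables."""
    vars_f, tab_f = f
    vars_g, tab_g = g
    vars_h = vars_f + [v for v in vars_g if v not in vars_f]
    tab_h = {}
    for vals in product((True, False), repeat=len(vars_h)):
        assign = dict(zip(vars_h, vals))
        key_f = tuple(assign[v] for v in vars_f)
        key_g = tuple(assign[v] for v in vars_g)
        tab_h[vals] = tab_f[key_f] * tab_g[key_g]
    return vars_h, tab_h

def sum_out(var, f):
    """Eliminate var: add together the rows that differ only in var."""
    vars_f, tab_f = f
    i = vars_f.index(var)
    vars_h = vars_f[:i] + vars_f[i + 1:]
    tab_h = {}
    for vals, p in tab_f.items():
        key = vals[:i] + vals[i + 1:]
        tab_h[key] = tab_h.get(key, 0.0) + p
    return vars_h, tab_h

# e.g. f_J(A) * f_M(A), then sum out A -> a factor over no variables (a scalar)
fJ = (["A"], {(True,): 0.90, (False,): 0.05})
fM = (["A"], {(True,): 0.70, (False,): 0.01})
scalar = sum_out("A", multiply(fJ, fM))   # scalar value 0.9*0.7 + 0.05*0.01
```

Multiplying produces a table exponential in the union of the variables, which is why the elimination ordering matters; summing out shrinks the table by one dimension.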

SLIDE 15

Variable Elimination


eliminating variables, e.g. P(J | b):

P(J | b) = α P(b) Σ_e P(e) Σ_a P(a | b, e) P(J | a) Σ_m P(m | a)

Since Σ_m P(m | a) = 1, the M factor drops out entirely: all variables that are not ancestors of the query or evidence are irrelevant!

SLIDE 16

EOLQs


What question didn’t you get to ask today?

What’s still confusing?

What would you like to hear more about?

Please write down your most pressing question about AI and put it in the box on your way out. Thanks!