Bayesian networks in Mastermind, Jiří Vomlel - PowerPoint PPT Presentation



SLIDE 1

Bayesian networks in Mastermind

Jiří Vomlel

http://www.utia.cas.cz/vomlel/


SLIDE 2

Contents

  • Bayesian networks, the tasks solved by them, and junction tree propagation
  • An application of BNs - adaptive testing
  • The game of Mastermind as an example of an adaptive test
  • Bayesian networks in the game of Mastermind
  • Efficient inference using Bayesian networks for Mastermind

SLIDE 3

[Figure: a Bayesian network over X1, ..., X9 with local distributions P(X1), P(X2), P(X3 | X1), P(X4 | X2), P(X5 | X1), P(X6 | X3, X4), P(X7 | X5), P(X8 | X7, X6), and P(X9 | X6).]

P(X1, ..., X9)
  = P(X9 | X8, ..., X1) · P(X8 | X7, ..., X1) · ... · P(X2 | X1) · P(X1)
  = P(X9 | X6) · P(X8 | X7, X6) · P(X7 | X5) · P(X6 | X4, X3)
    · P(X5 | X1) · P(X4 | X2) · P(X3 | X1) · P(X2) · P(X1)
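The factorization on this slide can be checked numerically. Below is a minimal Python sketch with binary variables and invented CPT values (the numbers are placeholders, not from the slides): the joint built as a product of local factors is a proper distribution.

```python
import itertools

# Invented binary CPTs for the nine-node network on this slide.
# Each entry gives P(child = 1 | parents); all values are placeholders.
P1 = 0.6                                                   # P(X1=1)
P2 = 0.3                                                   # P(X2=1)
P3 = {0: 0.2, 1: 0.7}                                      # P(X3=1 | X1)
P4 = {0: 0.5, 1: 0.1}                                      # P(X4=1 | X2)
P5 = {0: 0.4, 1: 0.8}                                      # P(X5=1 | X1)
P6 = {(0, 0): 0.1, (0, 1): 0.6, (1, 0): 0.3, (1, 1): 0.9}  # P(X6=1 | X3,X4)
P7 = {0: 0.25, 1: 0.75}                                    # P(X7=1 | X5)
P8 = {(0, 0): 0.2, (0, 1): 0.5, (1, 0): 0.4, (1, 1): 0.95} # P(X8=1 | X7,X6)
P9 = {0: 0.3, 1: 0.7}                                      # P(X9=1 | X6)

def bern(p, x):
    """P(X = x) for a Bernoulli variable with P(X=1) = p."""
    return p if x == 1 else 1.0 - p

def joint(x1, x2, x3, x4, x5, x6, x7, x8, x9):
    """P(X1..X9) as the product of the local factors (the second line above)."""
    return (bern(P1, x1) * bern(P2, x2) *
            bern(P3[x1], x3) * bern(P4[x2], x4) * bern(P5[x1], x5) *
            bern(P6[(x3, x4)], x6) * bern(P7[x5], x7) *
            bern(P8[(x7, x6)], x8) * bern(P9[x6], x9))

# Sanity check: the factorized joint sums to 1 over all 2^9 configurations.
total = sum(joint(*cfg) for cfg in itertools.product([0, 1], repeat=9))
```

Note the saving the factorization buys: the full joint table has 2^9 = 512 entries, while the nine local tables above together have far fewer parameters.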

SLIDE 4

Typical use of Bayesian networks

  • to model and explain a domain
  • to update beliefs about the states of certain variables when some other variables have been observed, i.e., to compute conditional probability distributions, e.g., P(X23 | X17 = yes, X54 = no)
  • to find the most probable configurations of variables
  • to support decision making under uncertainty
  • to find good strategies for solving tasks in a domain with uncertainty
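The belief-updating task in the second bullet can be illustrated on a toy chain X1 → X2 → X3 (a sketch with invented CPT values, not the network from the slides): a conditional such as P(X3 | X1 = 1) is obtained by summing the joint over the unobserved variable and normalizing.

```python
# A three-node chain X1 -> X2 -> X3 with invented CPTs, used to
# illustrate a conditional query by enumeration.
pX1 = 0.5
pX2 = {0: 0.2, 1: 0.8}   # P(X2=1 | X1)
pX3 = {0: 0.1, 1: 0.9}   # P(X3=1 | X2)

def b(p, x):
    return p if x else 1.0 - p

def joint(x1, x2, x3):
    return b(pX1, x1) * b(pX2[x1], x2) * b(pX3[x2], x3)

def query(x3, given_x1):
    """P(X3 = x3 | X1 = given_x1): sum the joint over the hidden X2, normalize."""
    num = sum(joint(given_x1, x2, x3) for x2 in (0, 1))
    den = sum(joint(given_x1, x2, v) for x2 in (0, 1) for v in (0, 1))
    return num / den

p = query(1, given_x1=1)   # 0.8 * 0.9 + 0.2 * 0.1 = 0.74
```

Enumeration like this is exponential in the number of hidden variables; junction tree propagation (next slide) is the standard way to avoid that blow-up.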

SLIDE 5

Bayesian networks: junction tree propagation

[Figure: the four steps of junction tree construction for the example network: (1) the original DAG over X1, ..., X9; (2) its moral graph; (3) a triangulated graph; (4) the junction tree with cliques {X1, X3, X5}, {X3, X5, X7}, {X3, X6, X7}, {X6, X7, X8}, {X3, X4, X6}, {X2, X4}, and {X6, X9}.]

SLIDE 6

A simple example of an adaptive test

[Figure: a decision tree for a simple adaptive test. The test starts with a question of medium difficulty; a correct answer leads to a difficult question, a wrong answer to an easy question. The answers to the second question classify the student as having no, low, medium, or good knowledge.]

SLIDE 7

The game of Mastermind

Tj, Hj ... the colors on the j-th position in the guess and in the hidden code. Let δ(A, B) equal one if A = B and zero otherwise.

Pj = δ(Tj, Hj)

P = Σ_{j=1..4} Pj

Ci = Σ_{j=1..4} δ(Hj, i)

Gi = Σ_{j=1..4} δ(Tj, i)

Mi = min(Ci, Gi)

C = Σ_{i=1..6} Mi - P
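The formulas above translate directly into code. A minimal sketch (colors encoded as the integers 1 to 6; the function name is mine, not from the slides):

```python
def score(T, H, colors=range(1, 7)):
    """Mastermind feedback for guess T against hidden code H.

    P = sum_j delta(T_j, H_j)        (exact position matches, "black pegs")
    C = sum_i min(C_i, G_i) - P      (right color, wrong position, "white pegs")
    """
    P = sum(t == h for t, h in zip(T, H))
    # M = sum_i M_i, where M_i = min(C_i, G_i) counts shared occurrences
    # of color i in the hidden code and in the guess.
    M = sum(min(H.count(i), T.count(i)) for i in colors)
    return P, M - P

# e.g. score((1, 2, 3, 4), (4, 3, 2, 1)) gives P = 0, C = 4
```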

SLIDE 8

Probability over the codes

Q(H1, ..., H4) ... the probability distribution over the possible codes. At the beginning of the game this distribution is uniform, i.e.

Q(H1 = h1, ..., H4 = h4) = 1/6^4 = 1/1296.

During the game we update the probability Q(H1, ..., H4) using the obtained evidence e and compute the conditional probability

Q(H1 = h1, ..., H4 = h4 | e) = 1/n(e) if (h1, ..., h4) is a possible code, and 0 otherwise,

where n(e) is the total number of codes that are still possible candidates for the hidden code.
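This update has a simple operational reading: the posterior stays uniform over the codes consistent with all feedback obtained so far. A brute-force sketch (feedback computed as on the previous slide; function names are mine):

```python
import itertools

def feedback(T, H):
    """(exact matches P, color-only matches C), as defined on the previous slide."""
    p = sum(t == h for t, h in zip(T, H))
    m = sum(min(T.count(i), H.count(i)) for i in range(1, 7))
    return p, m - p

def posterior(evidence):
    """evidence: list of (guess, (P, C)) pairs observed so far.

    Returns Q(H | e): 1/n(e) on each consistent code, 0 (omitted) elsewhere.
    """
    codes = itertools.product(range(1, 7), repeat=4)
    consistent = [h for h in codes
                  if all(feedback(g, h) == fb for g, fb in evidence)]
    n = len(consistent)
    return {h: 1.0 / n for h in consistent}

Q = posterior([])   # uniform prior: 1296 codes, each with probability 1/1296
```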

SLIDE 9

A measure of uncertainty - the Shannon entropy

A criterion suitable for measuring the uncertainty about the hidden code is the Shannon entropy

H(Q(H1, ..., H4 | e)) = - Σ_{h1,...,h4} Q(H1 = h1, ..., H4 = h4 | e) · log Q(H1 = h1, ..., H4 = h4 | e),

where 0 · log 0 is defined to be zero. Note that the Shannon entropy is zero if and only if the code is known.
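A minimal sketch of the entropy computation (I use the natural logarithm; the slide does not fix the base):

```python
import math

def entropy(Q):
    """Shannon entropy H(Q) = -sum_h Q(h) log Q(h), with 0 * log 0 := 0.

    Q is a dict mapping outcomes to probabilities; zero-probability
    outcomes may simply be omitted from the dict.
    """
    return -sum(q * math.log(q) for q in Q.values() if q > 0)

# Uniform over the 1296 codes: maximal uncertainty, H = log(1296).
uniform = {h: 1.0 / 1296 for h in range(1296)}
h0 = entropy(uniform)

# Code fully identified: entropy is zero, as noted on the slide.
known = {0: 1.0}
h1 = entropy(known)
```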

SLIDE 10

Optimal Mastermind strategies

Different criteria:

  • minimal expected length

    minimize the sum, over all suggested sequences, of (length of the sequence × probability of the sequence). Koyama and Lai (1993): an optimal strategy with expected length 5625/1296 ≈ 4.340 guesses.

  • minimal depth

    minimize the number of guesses in the worst case. Koyama and Lai (1993): a different strategy with a depth of 5 guesses.

  • most informative within a limited number of guesses

    minimize the sum, over all suggested sequences, of (entropy after the sequence × probability of the sequence).
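The third criterion suggests a greedy one-step strategy: pick the guess whose expected posterior entropy is smallest. A toy sketch on a reduced game (3 positions, 2 colors, so the search stays tiny; this is my illustration, not one of the strategies cited above):

```python
import itertools
import math
from collections import Counter

POS, COL = 3, 2
CODES = list(itertools.product(range(COL), repeat=POS))   # 2^3 = 8 codes

def feedback(T, H):
    p = sum(t == h for t, h in zip(T, H))
    m = sum(min(T.count(i), H.count(i)) for i in range(COL))
    return p, m - p

def expected_entropy(guess, candidates):
    """Expected entropy of the posterior after making `guess`.

    The candidates are grouped by the feedback they would produce; within
    a group of size k the posterior is uniform, so its entropy is log(k).
    """
    buckets = Counter(feedback(guess, h) for h in candidates)
    n = len(candidates)
    return sum(k / n * math.log(k) for k in buckets.values())

# Greedy choice: the guess minimizing the expected remaining uncertainty.
best = min(CODES, key=lambda g: expected_entropy(g, CODES))
```

The same loop scales to the full 6-color game in principle, but evaluating all 1296 × 1296 guess/code pairs at every step is exactly the kind of cost the Bayesian-network machinery on the following slides is meant to tame.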

SLIDE 11

Bayesian network for the probabilistic Mastermind

[Figure: the Bayesian network for probabilistic Mastermind, with hidden-code nodes H1-H4, guess nodes T1-T4, position-match nodes P1-P4 and P, color-count nodes C1-C6 and G1-G6, color-match nodes M1-M6, the node C, and auxiliary nodes P′ and C′.]


SLIDE 12

Bayesian network after inserting evidence

[Figure: the network after inserting evidence, shown after moralization (before triangulation). The moral graph connects the hidden-code nodes H1-H4 with C1-C6, the Pj nodes, the Mi nodes, P, C, and the auxiliary nodes P′ and C′.]

SLIDE 13

Transformation by introducing an auxiliary variable to the model

Savicky, Vomlel (2004)

[Figure: the node C originally has parents P and M1, ..., M6; after the transformation an auxiliary variable B is introduced between the Mi nodes and C, splitting the large parent set into smaller ones.]
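One way to see why the auxiliary variable helps (a rough illustration under my own assumptions about state counts, not the authors' exact construction): C = Σ Mi - P is a deterministic function of many parents, so represented directly its table is indexed by every joint parent state; chaining a running sum through an accumulator keeps every factor small.

```python
from functools import reduce
import itertools

# Assumption for illustration: each Mi has 5 states (0..4), and the
# running sums B_k = M1 + ... + Mk need at most 25 states.
STATES = range(5)

def chained_sum(ms):
    """B_1 = M1, B_k = B_{k-1} + M_k: pairwise steps replacing the 6-ary sum."""
    return reduce(lambda b, m: b + m, ms, 0)

# Rough factor sizes (parent configurations) for representing sum(M1..M6):
direct_factor = 5 ** 6           # one table indexed by all six Mi at once
chained_factor = 6 * (25 * 5)    # six small factors: one B_k (<=25 states)
                                 # and one Mi (5 states) each

# The chained decomposition computes the same function:
ok = all(chained_sum(ms) == sum(ms)
         for ms in itertools.product(STATES, repeat=6))
```

The numbers here are illustrative only; the slide that follows gives the actual junction tree sizes measured for the Mastermind network.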

SLIDE 14

Bayesian network after the suggested transformation

[Figure: the moralized Mastermind network after the suggested transformation, now including the auxiliary node B among H1-H4, C1-C6, P1-P4, M1-M6, P, C, P′, and C′.]

Junction tree size:

  • without the suggested transformation: > 20,526,445
  • after the suggested transformation: 214,775

SLIDE 15

Summary

  • The game of Mastermind is an example of an adaptive test.
  • In order to use Bayesian networks for the computations, we need to exploit the functional dependencies in the model.
  • The suggested transformation substantially decreases the computational demands.
