Extending the CLP engine for reasoning under uncertainty - Nicos Angelopoulos - PowerPoint PPT Presentation


SLIDE 1

Extending the CLP engine for reasoning under uncertainty

Nicos Angelopoulos

nicos@cs.york.ac.uk http://www.cs.york.ac.uk/~nicos

Department of Computer Science University of York

Ismis 2003 – p.1

SLIDE 2

talk structure

Logic Programming (LP)
Uncertainty and LP
Constraint LP
clp(pfd(Y))
clp(pfd(c))
Caesar’s coding experiments

Ismis 2003 – p.2

SLIDE 3

logic programming

Used in AI for crisp problem solving and for building executable models and intelligent systems. Programs are formed from logic-based rules.

member( H, [H|T] ).
member( El, [H|T] ) :- member( El, T ).

Ismis 2003 – p.3

SLIDE 4

execution tree

?- member( X, [a,b,c] ).
X = a
X = b
X = c

member( H, [H|T] ).
member( El, [H|T] ) :- member( El, T ).
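The two clauses enumerate the answers by backtracking; the execution tree can be mimicked in Python (used throughout these notes for testable sketches) with a generator:

```python
def member(xs):
    """Mimic member/2: clause 1 answers with the head,
    clause 2 backtracks into the tail."""
    if xs:                         # both clauses need a non-empty list
        yield xs[0]                # clause 1: X = head
        yield from member(xs[1:])  # clause 2: recurse on the tail

answers = list(member(["a", "b", "c"]))
# answers appear in the order of the execution tree: a, b, c
```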

Ismis 2003 – p.4

SLIDE 5

uncertainty in logic programming

Most approaches use Probability Theory, but fundamental questions remain unresolved. In general:

0.5 : member( H, [H|T] ).
0.5 : member( El, [H|T] ) :- member( El, T ).

Ismis 2003 – p.5

SLIDE 6

stochastic tree

?- member( X, [a,b,c] ).

[Tree: each clause choice is taken with probability 1/2, giving]
1/2 : X = a
1/4 : X = b
1/8 : X = c

0.5 : member( H, [H|T] ).
0.5 : member( El, [H|T] ) :- member( El, T ).
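Reading the tree: each clause choice contributes a factor 1/2, so the three answers carry probabilities 1/2, 1/4 and 1/8, and the remaining 1/8 is the mass of the failing branch at the empty list. A Python sketch of this semantics (illustration only, not the stochastic engine):

```python
def stochastic_member(xs, p=1.0):
    """Each answer's probability is the product of the 0.5
    clause probabilities chosen along its derivation."""
    if xs:
        yield xs[0], p * 0.5                           # clause 1, prob 0.5
        yield from stochastic_member(xs[1:], p * 0.5)  # clause 2, prob 0.5

answers = dict(stochastic_member(["a", "b", "c"]))
# {'a': 0.5, 'b': 0.25, 'c': 0.125}
```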

Ismis 2003 – p.6


SLIDE 9

constraints in lp

Logic Programming: the execution model is inflexible, and its relational nature discourages use of state information. Constraints add:
specialised algorithms
state information

Ismis 2003 – p.7

SLIDE 10

constraint store

[Diagram: the Logic Programming engine, executing a query ?- Q, interacts with the constraint store, which holds constraints such as X # Y.]

Ismis 2003 – p.8

SLIDE 11

constraint inference

?- Q.
Store: X in {a,b}, Y in {b,c}
Adding X = Y infers: X = Y = b
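The inference step above is plain domain propagation; a minimal Python sketch, with sets standing in for finite domains:

```python
# finite domains as Python sets (illustration of the propagation step)
x = {"a", "b"}
y = {"b", "c"}
x = y = x & y   # posting X = Y intersects the two domains
# both domains collapse to {"b"}, so X = Y = b
```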

Ismis 2003 – p.9


SLIDE 13

finite domain distributions

For discrete probabilistic models, clp(pfd(Y)) extends the idea of finite domains to admit distributions:

from clp(fd): X in {a, b} (i.e. X = a ∨ X = b)
to clp(pfd(Y)): p(X = a) + p(X = b) = 1
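A probabilistic finite-domain variable can be sketched as a domain paired with a normalised distribution (a Python illustration, not the clp(pfd) representation):

```python
from fractions import Fraction

class PVar:
    """A finite-domain variable whose elements carry probabilities summing to 1."""
    def __init__(self, dist):
        self.dist = dict(dist)
        assert sum(self.dist.values()) == 1, "distribution must be normalised"

    def p(self, elem):
        """Probability of the variable taking a given element (0 if outside the domain)."""
        return self.dist.get(elem, Fraction(0))

x = PVar({"a": Fraction(1, 2), "b": Fraction(1, 2)})
# p(X = a) + p(X = b) = 1
```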

Ismis 2003 – p.10

SLIDE 14

constraint based integration

Execution assembles the probabilistic model in the store according to the program and query. Dedicated algorithms can then be used for probabilistic inference on the model present in the store.

Ismis 2003 – p.11

SLIDE 15

probability of predicates

pvars(E) - the variables in predicate E
e - a vector of finite domain elements
p(e_i) - the probability of element e_i
S - a constraint store
E/e - E with its variables replaced by e

The probability of predicate E with respect to store S is

P_S(E) = Σ_{e : S ⊢ E/e} P_S(e) = Σ_{e : S ⊢ E/e} Π_i p(e_i)
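Concretely, the sum ranges over the element vectors that the store admits; a Python sketch under the simplifying assumption that the store is just a predicate over vectors:

```python
from fractions import Fraction
from itertools import product
from math import prod

def predicate_prob(domains, store):
    """P_S(E): sum, over element vectors e admitted by store S,
    of the product of the element probabilities p(e_i)."""
    total = Fraction(0)
    for e in product(*[d.keys() for d in domains]):
        if store(e):  # S ⊢ E/e
            total += prod((d[v] for d, v in zip(domains, e)), start=Fraction(1))
    return total

# X and Y each uniform over {a, b}; the store admits vectors with X = Y
u = {"a": Fraction(1, 2), "b": Fraction(1, 2)}
p = predicate_prob([u, u], lambda e: e[0] == e[1])
# p = 1/4 + 1/4 = 1/2
```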

Ismis 2003 – p.12

SLIDE 16

clp(pfd(c))

Probabilistic variable definitions:
Coin ∈p finite_geometric([head, tail], 2)

Conditional constraints of the form:
Dependent π Qualifier
(a dependent constraint qualified, with probability π, by a qualifier constraint)
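A sketch of how such a distribution might be built, under the assumption (not confirmed by the slide) that finite_geometric assigns weights decaying by the given factor and then normalises:

```python
from fractions import Fraction

def finite_geometric(elements, factor):
    """Assumed semantics: successive elements get weights 1, 1/factor, 1/factor^2, ...,
    normalised to sum to 1. Sketch only; the paper defines the actual distribution."""
    weights = [Fraction(1, factor ** i) for i in range(len(elements))]
    total = sum(weights)
    return {e: w / total for e, w in zip(elements, weights)}

coin = finite_geometric(["head", "tail"], 2)
# under this assumption, head is twice as likely as tail
```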

Ismis 2003 – p.13

SLIDE 17

labelling

In clp(fd) systems, labelling instantiates a group of variables to elements from their domains. In clp(pfd(Y)) the probabilities can be used to guide labelling. For instance, we have implemented

label( Vars, mua, Prbs, Total )

The selector mua approximates a best-first algorithm which instantiates a group of variables to their most likely combinations.
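A best-first labelling in the spirit of mua can be sketched with a priority queue over partial assignments (illustrative Python; independence of the variables is assumed, and mua itself is defined in the paper):

```python
import heapq

def best_first_label(domains):
    """Yield complete assignments in decreasing joint probability.
    domains: list of dicts mapping element -> probability."""
    heap = [(-1.0, ())]  # entries: (negated probability so far, partial assignment)
    while heap:
        neg_p, partial = heapq.heappop(heap)
        i = len(partial)
        if i == len(domains):            # all variables instantiated
            yield list(partial), -neg_p
            continue
        for elem, p in domains[i].items():
            # extending multiplies in p <= 1, so probabilities only decrease:
            # popping a complete assignment means nothing pending can beat it
            heapq.heappush(heap, (neg_p * p, partial + (elem,)))

coin = {"h": 0.7, "t": 0.3}
first, p = next(best_first_label([coin, coin]))
# the single most likely combination comes out first: ["h", "h"]
```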

Ismis 2003 – p.14

SLIDE 18

Caesar’s coding

Each letter is encrypted to a random letter. Words drawn from a dictionary are encrypted, and programs try to decode them. We compared a clp(fd) solution to a clp(pfd(c)) one:

clp(fd): no probabilistic information, labelling in lexicographical order.
clp(pfd(c)): distributions based on letter frequencies, labelling using mua.
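The encryption itself amounts to a random substitution key; a Python sketch, assuming a one-to-one letter mapping so that decoding is possible (the benchmark's exact setup is in the paper):

```python
import random
import string

def random_key(rng):
    """A substitution key: each letter maps to a distinct random letter."""
    letters = list(string.ascii_lowercase)
    shuffled = letters[:]
    rng.shuffle(shuffled)
    return dict(zip(letters, shuffled))

def encrypt(word, key):
    """Encrypt a lowercase word letter by letter."""
    return "".join(key[c] for c in word)

rng = random.Random(0)          # seeded for reproducibility
key = random_key(rng)
inverse = {v: k for k, v in key.items()}
# decoding with the inverse key recovers the original word
```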

Ismis 2003 – p.15

SLIDE 19

proximity functions

Xi - variable for the ith encoded letter
Dj - dictionary letter
freq() - frequency of a letter

px(Xi, Dj) = (1 / |freq(Xi) − freq(Dj)|) / Σ_k (1 / |freq(Xi) − freq(Dk)|)
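The proximity function is a normalised inverse-distance weighting of letter frequencies; a Python sketch (the epsilon guarding the zero-distance case is an assumption the slide leaves implicit):

```python
def proximity(fx, dict_freqs):
    """px(Xi, Dj): inverse of |freq(Xi) - freq(Dj)|, normalised over dictionary letters.
    fx: observed frequency of the encoded letter; dict_freqs: letter -> frequency.
    A small epsilon keeps the weight finite when two frequencies coincide (assumption)."""
    eps = 1e-9
    weights = {d: 1.0 / (abs(fx - f) + eps) for d, f in dict_freqs.items()}
    total = sum(weights.values())
    return {d: w / total for d, w in weights.items()}

px = proximity(0.12, {"e": 0.127, "t": 0.091, "z": 0.001})
# the dictionary letter with the closest frequency ("e") gets the largest probability
```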

Ismis 2003 – p.16

SLIDE 20

execution times

[Plot: execution time curves for clp(FD) and pfd; axis ticks omitted.]

clp(pfd(c)) and clp(fd) on SICStus 3.8.6

Ismis 2003 – p.17

SLIDE 21

bottom line

Constraint LP based techniques can be used to build frameworks that support probabilistic problem solving. clp(pfd(Y)) can be used to take advantage of probabilistic information at an abstract level.

Ismis 2003 – p.18

SLIDE 22

Monty Hall

Three curtains hide a car and two goats. The contestant chooses an initial curtain. A closed curtain opens to reveal a goat. The contestant is asked for their final choice. What is the best strategy? Stay or switch?

Ismis 2003 – p.19

SLIDE 23

Monty Hall solution

If the probability of switching is Swt (Swt = 0 for strategy Stay and Swt = 1 for Switch), then the probability of a win is

P(γ) = (1 + Swt) / 3.
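The formula can be checked by exhaustive enumeration over the host's choices (a Python check of the arithmetic, independent of the clp program):

```python
from fractions import Fraction
from itertools import product

def win_prob(swt):
    """P(win) for a contestant who switches with probability swt."""
    total = Fraction(0)
    curtains = ["a", "b", "c"]
    for gift, first in product(curtains, repeat=2):  # each pair has prob 1/9
        case_p = Fraction(1, 9)
        # the host reveals uniformly among curtains hiding a goat, other than the pick
        reveals = [c for c in curtains if c != gift and c != first]
        for reveal in reveals:
            stay_win = Fraction(1 if first == gift else 0)
            switched = next(c for c in curtains if c not in (first, reveal))
            switch_win = Fraction(1 if switched == gift else 0)
            total += case_p / len(reveals) * ((1 - swt) * stay_win + swt * switch_win)
    return total

# win_prob(swt) comes out as (1 + swt) / 3, matching the slide
```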

Ismis 2003 – p.20

SLIDE 24

Monty Hall in clp(pfd(c))

curtains( gamma, Swt, Prb ) :-
    Gift ∈p uniform([a, b, c]),
    First ∈p uniform([a, b, c]),
    Reveal ∈p uniform([a, b, c]),
    Second ∈p uniform([a, b, c]),
    Reveal ≠ Gift,
    Reveal ≠ First,
    Second ≠ Reveal,
    Second ≠Swt First,
    Prb is p(Second = Gift).

(The last disequality is the conditional form: Second differs from First with probability Swt.)
Ismis 2003 – p.21


SLIDE 27

Strategy γ Query

Querying this program:

?- curtains(gamma, 1/2, Prb).
Prb = 1/2.

?- curtains(gamma, 1, Prb).
Prb = 2/3.

?- curtains(gamma, 0, Prb).
Prb = 1/3.

Ismis 2003 – p.22

SLIDE 28

clp(pfd(BN))

Other kinds of discrete probabilistic inference are possible, for instance Bayesian Network representation and inference.

Ismis 2003 – p.23

SLIDE 29

example BN

[Network: A is the parent of both B and C.]

p(B | A):        A = y    A = n
  B = y          0.80     0.10
  B = n          0.20     0.90

p(C | A):        A = y    A = n
  C = y          0.60     0.90
  C = n          0.40     0.10

Ismis 2003 – p.24

SLIDE 30

clp(pfd(BN)) program

example_bn( A, B, C ) :-
    cpt( A, [], [y, n] ),
    cpt( B, [A], [(y,y,0.8), (y,n,0.2), (n,y,0.1), (n,n,0.9)] ),
    cpt( C, [A], [(y,y,0.6), (y,n,0.4), (n,y,0.9), (n,n,0.1)] ).

Ismis 2003 – p.25


SLIDE 32

program

example_bn( A, B, C ) :-
    cpt( A, [], [y, n] ),
    cpt( B, [A], [(y,y,0.8), (y,n,0.2), (n,y,0.1), (n,n,0.9)] ),
    cpt( C, [A], [(y,y,0.6), (y,n,0.4), (n,y,0.9), (n,n,0.1)] ).

?- example_bn( X, Y, Z ),
   evidence( X, [(y,0.8), (n,0.2)] ),
   Zy is p(Z = y).
Zy = 0.66
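The answer follows from marginalising over A; a Python check of the arithmetic (not the clp(pfd(BN)) solver):

```python
# evidence on A and the CPT entries for C given A, from the example network
p_a = {"y": 0.8, "n": 0.2}
p_c_given_a = {("y", "y"): 0.6, ("n", "y"): 0.9}  # p(C = y | A = a)

# p(C = y) = sum over a of p(A = a) * p(C = y | A = a)
zy = sum(p_a[a] * p_c_given_a[(a, "y")] for a in p_a)
# 0.8 * 0.6 + 0.2 * 0.9 = 0.66
```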

Ismis 2003 – p.27

SLIDE 33

current inference scheme

[Diagram: the Logic engine and CLP constraints build the store; probabilistic inference and learning operate on it.]

Ismis 2003 – p.28

SLIDE 34

bottom line

[Diagram: the Logic engine coupled with CLP constraint propagation and probabilistic inference.]

Ismis 2003 – p.29