Stochastic, Partially Observable Markov Decision Process (MDP)



SLIDE 1

CSE‐473 Artificial Intelligence

Partially-Observable MDPs (POMDPs)

Three environments (diagram):

  • Classical Planning: static, fully observable, deterministic, discrete environment; perfect percepts. What action next?
  • Stochastic Planning (MDPs, Reinforcement Learning): static, fully observable, stochastic (unpredictable), discrete environment; perfect percepts.
  • Partially-Observable MDPs: static, partially observable, stochastic (unpredictable), discrete environment; noisy percepts.

Classical Planning

  • Heaven/hell example (figure): reward 100 for reaching heaven
  • World deterministic
  • State observable
  • Output: a sequential plan

MDP-Style Planning

  • Heaven/hell example (figure)
  • World stochastic
  • State observable
  • Output: a policy
SLIDE 2

Stochastic, Partially Observable

  • Heaven/hell example (figure): now the agent does not know which location is heaven and which is hell ("hell? heaven?"); a sign in the world carries the answer.

Markov Decision Process (MDP)

  • S: set of states
  • A: set of actions
  • Pr(s′|s,a): transition model
  • R(s,a,s′): reward model
  • γ: discount factor
  • s0: start state

Partially-Observable MDP

  • S: set of states
  • A: set of actions
  • Pr(s′|s,a): transition model
  • R(s,a,s′): reward model
  • γ: discount factor
  • s0: start state
  • E: set of possible evidence (observations)
  • Pr(e|s): evidence model (see the sketch below)
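These components map directly onto a small data structure. A minimal sketch in Python; the class and field names are illustrative, not from any course codebase:

```python
# A minimal POMDP container, assuming dictionary-based models.
from dataclasses import dataclass

@dataclass
class POMDP:
    S: list        # set of states
    A: list        # set of actions
    T: dict        # transition model: T[(s, a)][s2] = Pr(s2 | s, a)
    R: dict        # reward model:     R[(s, a, s2)] = reward
    gamma: float   # discount factor
    s0: object     # start state
    E: list        # set of possible evidence (observations)
    O: dict        # evidence model:   O[s][e] = Pr(e | s)
```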

Belief State

  • A belief state is the state of the agent's mind, not just of the world
  • Example (figure): two sign states, belief 50% / 50%
  • Note: in a POMDP, the probabilities in a belief state sum to 1

Planning in Belief Space

For now, assume movement is deterministic.

  • Initial belief (figure): 50% / 50% over the two sign states; expected reward: 0
  • After moving (e.g., action N), the belief is still 50% / 50%; expected reward: 0

Partially-Observable MDP (recap of the definition above)
SLIDE 3

Evidence Model

  • S = {swb, seb, swm, sem, swul, seul, swur, seur}
  • E = {heat}
  • Pr(e|s): Pr(heat | seb) = 1.0, Pr(heat | swb) = 0.2, Pr(heat | sother) = 0.0
  • (Figure: the two bottom sign states, seb and swb)

Planning in Belief Space

  • Observations change the belief. With Pr(heat | seb) = 1.0 and Pr(heat | swb) = 0.2:
  • Starting from a 50% / 50% belief, sensing heat yields a 17% / 83% belief by Bayes rule: 0.5·0.2 / (0.5·0.2 + 0.5·1.0) ≈ 0.17 for swb, and 0.5·1.0 / 0.6 ≈ 0.83 for seb (computed in the sketch below).
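This posterior is a one-line application of Bayes rule. A minimal sketch in Python, using the evidence model above (function and variable names are illustrative):

```python
def belief_update(belief, e, O):
    """Bayes rule: b'(s) is proportional to Pr(e | s) * b(s)."""
    posterior = {s: O[s].get(e, 0.0) * p for s, p in belief.items()}
    total = sum(posterior.values())
    return {s: p / total for s, p in posterior.items()}

# The two bottom sign states from the evidence model above:
O = {"swb": {"heat": 0.2}, "seb": {"heat": 1.0}}
b = {"swb": 0.5, "seb": 0.5}
print(belief_update(b, "heat", O))
# {'swb': 0.1666..., 'seb': 0.8333...}  i.e. 17% / 83%
```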

Objective of a Fully Observable MDP

  • Find a policy π : S → A
  • which maximizes expected discounted reward
  • given an infinite horizon
  • assuming full observability

Objective of a POMDP

  • Find a policy π : BeliefStates(S) → A, where a belief state is a probability distribution over states
  • which maximizes expected discounted reward
  • given an infinite horizon
  • assuming partial & noisy observability

Planning in HW 4

  • Compute the MAP estimate of the state
  • Now "know" the state
  • Solve the MDP
  • Best plan to eat the final food?

SLIDE 4

Problem with Planning from the MAP Estimate

  • The best action for a belief state over k worlds may not be the best action in any one of those worlds (figure: beliefs of 49% / 51% and 10% / 90%). A numeric illustration follows.
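A tiny numeric illustration with invented payoffs (the action names and numbers are hypothetical, not from the homework):

```python
# Hypothetical payoffs: each action maps to (reward in w1, reward in w2).
payoff = {"go_left":  (100, 0),    # optimal only in w1
          "go_right": (0, 100),    # optimal only in w2
          "hedge":    (70, 70)}    # optimal in neither world alone
belief = (0.49, 0.51)

expected = {a: belief[0] * r1 + belief[1] * r2
            for a, (r1, r2) in payoff.items()}
print(max(expected, key=expected.get), expected)
# 'hedge' wins (70) over 'go_right' (51), yet planning from the
# MAP world (w2) would commit to 'go_right'.
```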

POMDPs

 In POMDPs we apply the very same idea as in MDPs.
 Since the state is not observable, the agent has to make its decisions based on the belief state, which is a posterior distribution over states.
 Let b be the belief of the agent about the state under consideration.
 POMDPs compute a value function over belief space: V(b).

Problems

 Each belief is a probability distribution; thus, each value in a POMDP is a function of an entire probability distribution.
 This is problematic, since probability distributions are continuous.
 How many belief states are there? For finite worlds with finite state, action, and measurement spaces and finite horizons, however, we can effectively represent the value functions by piecewise linear functions (see the sketch below).
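Concretely, such a piecewise linear value function is stored as a set of linear components ("alpha vectors"), and the value of a belief is the maximum dot product. A minimal sketch; the two vectors shown are the payoff lines derived in the example below:

```python
import numpy as np

def value(b, alphas):
    """Piecewise linear V(b) = max over components alpha of alpha . b"""
    return max(float(np.dot(alpha, b)) for alpha in alphas)

# The two linear components of V1 in the example below:
# r(b, u1) = -100*p1 + 100*p2 and r(b, u2) = 100*p1 - 50*p2
alphas = [np.array([-100.0, 100.0]), np.array([100.0, -50.0])]
print(value(np.array([0.3, 0.7]), alphas))  # max(40.0, -5.0) -> 40.0
```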

An Illustrative Example

 Two states x1, x2; two terminal actions u1, u2; one sensing action u3; two measurements z1, z2.
 Payoffs of the terminal actions: r(x1, u1) = -100, r(x2, u1) = +100, r(x1, u2) = +100, r(x2, u2) = -50.
 Measurement model: p(z1 | x1) = 0.7, p(z2 | x1) = 0.3, p(z1 | x2) = 0.3, p(z2 | x2) = 0.7.
 Under the sensing action u3 the state changes with probability 0.8 and stays with probability 0.2.

What is Belief Space?

 With two states, a belief is fully described by a single number p1 = P(state = x1), so belief space is the interval [0, 1].
 (Figure: the example above, with Value plotted over p1 = P(state = x1).)

SLIDE 5

The Parameters of the Example

 The actions u1 and u2 are terminal actions.
 The action u3 is a sensing action that potentially leads to a state transition.
 The horizon is finite and γ = 1.

Payoff in POMDPs

 In MDPs, the payoff (or return) depended on the state of the system.
 In POMDPs, however, the true state is not exactly known.
 Therefore, we compute the expected payoff by integrating over all states:

    r(b, u) = E_x[r(x, u)] = Σ_x b(x) · r(x, u)

Payoffs in Our Example

 If we are totally certain that we are in state x1 and execute action u1, we receive a reward of -100.
 If, on the other hand, we definitely know that we are in x2 and execute u1, the reward is +100.
 In between, the payoff is the linear combination of the extreme values, weighted by the probabilities:

    r(b, u1) = -100 p1 + 100 (1 - p1) = 100 - 200 p1

 Similarly for u2:

    r(b, u2) = 100 p1 - 50 (1 - p1) = 150 p1 - 50
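Both lines are just the expected payoff formula evaluated on a two-state belief. A minimal sketch checking them (names illustrative):

```python
def expected_payoff(b, r_u):
    """r(b, u) = sum over states x of b(x) * r(x, u)."""
    return sum(b[x] * r_u[x] for x in b)

r_u1 = {"x1": -100.0, "x2": 100.0}
r_u2 = {"x1": 100.0,  "x2": -50.0}
for p1 in (0.0, 0.25, 0.5, 1.0):
    b = {"x1": p1, "x2": 1.0 - p1}
    assert expected_payoff(b, r_u1) == 100 - 200 * p1   # r(b, u1)
    assert expected_payoff(b, r_u2) == 150 * p1 - 50    # r(b, u2)
```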

Payoffs in Our Example (2)

(Figure: the payoff lines plotted over p1.)

The Resulting Policy for T=1

 Given a finite POMDP with time horizon T = 1, use V1(b) to determine the optimal policy.
 Corresponding value: V1(b) = max_u r(b, u) = max(100 - 200 p1, 150 p1 - 50).

SLIDE 6

Piecewise Linearity, Convexity

 The resulting value function V1(b) is the maximum of the three functions at each point.
 It is piecewise linear and convex.

Pruning

 With V1(b), note that only the first two components contribute.
 The third component can be safely pruned.

Increasing the Time Horizon

 Assume the robot can make an observation before deciding on an action.

(Figure: V1(b).)

Increasing the Time Horizon

 What if the robot can observe before acting?
 Suppose it perceives z1: p(z1 | x1) = 0.7 and p(z1 | x2) = 0.3.
 Given the observation z1, we update the belief using Bayes rule:

    p1' = 0.7 p1 / p(z1),  where  p(z1) = 0.7 p1 + 0.3 (1 - p1) = 0.4 p1 + 0.3

 Now, V1(b | z1) is given by evaluating V1 at this updated belief.

Value Function

(Figure: V1(b), the updated belief b'(b | z1), and V1(b | z1).)

Expected Value after Measuring

 But we do not know in advance what the next measurement will be,
 so we must compute the expected belief:

    V̄1(b) = E_z[ V1(b | z) ]
           = Σ_{i=1..2} p(z_i) · V1(b | z_i)
           = Σ_{i=1..2} p(z_i) · V1( p(z_i | x1) p1 / p(z_i) , p(z_i | x2) p2 / p(z_i) )
           = Σ_{i=1..2} V1( p(z_i | x1) p1 , p(z_i | x2) p2 )

 (The last step uses the linearity of each component of V1, which cancels the normalizer p(z_i).)
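A minimal sketch of this expectation for the example, folding the Bayes update into the pruned V1 from above:

```python
P_Z_GIVEN_X = {"z1": (0.7, 0.3), "z2": (0.3, 0.7)}  # p(z | x1), p(z | x2)

def V1(p1):
    """Pruned V1(b): max of the two payoff lines, over p1 = P(x1)."""
    return max(100 - 200 * p1, 150 * p1 - 50)

def expected_V1(p1):
    """V-bar_1(b) = sum over z_i of p(z_i) * V1(b | z_i)."""
    total = 0.0
    for pz_x1, pz_x2 in P_Z_GIVEN_X.values():
        p_z = pz_x1 * p1 + pz_x2 * (1 - p1)   # normalizer p(z_i)
        total += p_z * V1(pz_x1 * p1 / p_z)   # posterior p1' = p(z_i|x1) p1 / p(z_i)
    return total

print(expected_V1(0.6))  # approx. 46.0
```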

SLIDE 7


Resulting Value Function

 The four possible combinations yield the following function, which can then be simplified and pruned.

Value Function

(Figure: p(z1) · V1(b | z1), the updated belief b'(b | z1), p(z2) · V1(b | z2), and their sum V̄1(b).)

State Transitions (Prediction)

 When the agent selects u3, its state may change.
 When computing the value function, we have to take these potential state changes into account.

Resulting Value Function after executing u3

 Taking the state transitions into account, we finally obtain:

Value Function after executing u3

(Figure: V̄1(b) and V̄1(b | u3).)
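The prediction step behind V̄1(b | u3) is the usual convolution of the belief with the transition model. A minimal sketch, assuming the standard transition numbers for this example (state flips with probability 0.8 under u3):

```python
# Assumed u3 transition model: p(x1'|x1,u3) = 0.2, p(x1'|x2,u3) = 0.8.
def predict_p1(p1):
    """b'(x1) = sum over x of p(x1' | x, u3) * b(x) = 0.2*p1 + 0.8*(1 - p1)."""
    return 0.2 * p1 + 0.8 * (1 - p1)

print(predict_p1(1.0))  # 0.2 -- near-certainty is largely destroyed by u3
print(predict_p1(0.5))  # 0.5 -- a uniform belief stays uniform
```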

SLIDE 8

Value Function for T=2

 Taking into account that the agent can either directly perform u1 or u2, or first u3 and then u1 or u2, we obtain (after pruning):

Graphical Representation of V2(b)

 (Figure: regions where u1 is optimal, where u2 is optimal, and where it is unclear.)
 The outcome of measuring is important here.

Deep Horizons

 We have now completed a full backup in belief space.
 This process can be applied recursively.
 The value functions for T = 10 and T = 20 are:

Deep Horizons and Pruning

(Figure only.)

Why Pruning is Essential

 Each update introduces additional linear components to V.
 Each measurement squares the number of linear components.
 Thus, an unpruned value function for T = 20 includes more than 10^547,864 linear functions; at T = 30 we have 10^561,012,337 linear functions.
 The pruned value function at T = 20, in comparison, contains only 12 linear components.
 The combinatorial explosion of linear components in the value function is the major reason why exact solution of POMDPs is usually impractical. (A pruning sketch follows.)
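A common first pass at pruning drops every component that is pointwise dominated by another. A minimal sketch (exact pruning also removes components dominated only by the upper surface of several others, which requires a linear program):

```python
import numpy as np

def prune_pointwise(alphas):
    """Keep only alpha vectors not dominated elementwise by another."""
    kept = []
    for i, a in enumerate(alphas):
        dominated = any(i != j and np.all(b >= a) and np.any(b > a)
                        for j, b in enumerate(alphas))
        if not dominated:
            kept.append(a)
    return kept

alphas = [np.array([-100.0, 100.0]), np.array([100.0, -50.0]),
          np.array([-120.0, 90.0])]   # the third line never wins
print(prune_pointwise(alphas))        # keeps only the first two
```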

POMDP Approximations

 Point-based value iteration
 QMDPs
 AMDPs

SLIDE 9

Point-based Value Iteration

 Maintains a set of example beliefs.
 Only considers constraints that maximize the value function for at least one of the examples (see the sketch below).
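The selection rule can be sketched as: keep a linear component only if it is the argmax at one of the example beliefs. A minimal sketch of this filtering step (not of the full PBVI backup):

```python
import numpy as np

def pbvi_filter(alphas, example_beliefs):
    """Keep only alpha vectors maximal at >= 1 example belief."""
    kept = set()
    for b in example_beliefs:
        kept.add(max(range(len(alphas)),
                     key=lambda i: float(np.dot(alphas[i], b))))
    return [alphas[i] for i in sorted(kept)]

alphas = [np.array([-100.0, 100.0]), np.array([100.0, -50.0]),
          np.array([10.0, 10.0])]
beliefs = [np.array([0.1, 0.9]), np.array([0.9, 0.1])]
print(len(pbvi_filter(alphas, beliefs)))  # 2: the flat vector is dropped
```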

Point-based Value Iteration

 (Figure: value functions for T = 30, exact value function vs. PBVI.)

QMDPs

 QMDPs only consider state uncertainty in the first step.
 After that, the world becomes fully observable.
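In code, the QMDP rule is: solve the underlying MDP for Q(s, a), then pick the action with the highest belief-weighted Q-value. A minimal sketch, assuming Q has already been computed by value iteration (the Q-values below reuse the example's terminal payoffs):

```python
def qmdp_action(belief, Q, actions):
    """QMDP: argmax over a of sum over s of b(s) * Q(s, a).

    State uncertainty enters only through this single expectation;
    the Q-values themselves assume full observability afterwards.
    """
    return max(actions,
               key=lambda a: sum(p * Q[(s, a)] for s, p in belief.items()))

# Q-values taken from the example's terminal payoffs:
Q = {("x1", "u1"): -100, ("x2", "u1"): 100,
     ("x1", "u2"): 100,  ("x2", "u2"): -50}
print(qmdp_action({"x1": 0.4, "x2": 0.6}, Q, ["u1", "u2"]))  # 'u1'
```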

POMDP Summary

 POMDPs compute the optimal action in partially observable, stochastic domains.
 For finite horizon problems, the resulting value functions are piecewise linear and convex.
 In each iteration the number of linear constraints grows exponentially.
 Until recently, POMDPs were only applied to very small state spaces with small numbers of possible observations and actions.
 But with PBVI, |S| = millions.