

  1. Computational Approaches for Stochastic Shortest Path on Succinct MDPs. Krishnendu Chatterjee (IST Austria), Hongfei Fu (Shanghai Jiao Tong University), Amir Goharshady (IST Austria), Nastaran Okati (Ferdowsi University of Mashhad). IJCAI 2018


  9. Succinct MDPs. A succinct MDP is an MDP described implicitly by:
     - a set of variables,
     - a set of rules that describe how the variables can be updated,
     - a target set, consisting of valuations of the variables.
     At every time step, a rule is non-deterministically chosen to update the variables. This process continues until the target set is reached. We can think of a succinct MDP as a probabilistic program of the following form:
         while φ do Q1 □ ... □ Qk od
     where □ denotes non-determinism and each Qi is a sequence of assignments to variables.
     Example: while x ≥ 1 do x := x + r □ x := x − 1 od

  10. Another Example
      while x ≥ 1 do
          if (0.4) { x := x + 1; reward := 1 } else { x := x − 1 }
        □
          if (0.3) { x := x + 1; reward := 1 } else { x := x − 1 }
      od
      Figure: Gambler's Ruin as a Succinct MDP
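To make the rule-based semantics concrete, here is a small simulation sketch (not part of the original slides): it encodes the two non-deterministic rules of the Gambler's Ruin program and runs the loop under a caller-supplied rule-choice policy. All names (step_rule, run_episode, the policy interface) are illustrative choices, not the authors' implementation.

    import random

    def step_rule(x, p):
        """One probabilistic rule: with probability p, increment x and collect
        reward 1; otherwise decrement x and collect no reward."""
        if random.random() < p:
            return x + 1, 1
        return x - 1, 0

    RULE_PROBS = [0.4, 0.3]  # the two non-deterministic rules of the program above

    def run_episode(x0, policy, max_steps=10**6):
        """Run 'while x >= 1' once, letting policy(x, history) pick which rule
        to execute; returns the total reward collected before termination."""
        x, total, history = x0, 0, []
        for _ in range(max_steps):
            if x < 1:                    # loop guard violated: the target set is reached
                return total
            i = policy(x, history)       # non-determinism resolved by the policy
            x, r = step_rule(x, RULE_PROBS[i])
            total += r
            history.append(i)
        raise RuntimeError("episode did not terminate within max_steps")

A memoryless policy is then just a function of the current valuation, e.g. lambda x, h: 0 always executes the first (p = 0.4) rule.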


  12. Stochastic Shortest Path. Fix an initial valuation v for the program variables. Let σ be a policy that, at any point in time, given the history of the program, chooses one of the Qi's to be executed. We define R∞(v, σ) as the expected sum of rewards collected by the program before termination, if the program starts with the valuation v and follows the policy σ. We define infval(v) = inf_σ R∞(v, σ) and supval(v) = sup_σ R∞(v, σ), where the inf and sup are taken over all policies that guarantee finite expected termination time. We are looking for methods to obtain upper and lower bounds for both infval and supval.
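For a fixed policy σ, R∞(v, σ) can be estimated by plain Monte Carlo. The sketch below (mine, not from the slides) does this for the Gambler's Ruin program under the two memoryless policies that always execute the same rule; the sample sizes and the expected values in the comments are my own back-of-the-envelope choices and calculations.

    import random

    def episode_reward(x0, p, max_steps=10**6):
        """Total reward of one run of the Gambler's Ruin loop under the
        memoryless policy that always executes the rule with probability p."""
        x, total = x0, 0
        for _ in range(max_steps):
            if x < 1:
                return total
            if random.random() < p:
                x, total = x + 1, total + 1
            else:
                x -= 1
        raise RuntimeError("no termination within max_steps")

    def estimate_R(x0, p, runs=20000):
        """Monte Carlo estimate of R_inf(x0, sigma) for the fixed policy above."""
        return sum(episode_reward(x0, p) for _ in range(runs)) / runs

    x0 = 5
    print("always 0.4-rule:", estimate_R(x0, 0.4))  # empirically close to 2 * x0, the supval bound derived on later slides
    print("always 0.3-rule:", estimate_R(x0, 0.3))  # a smaller value, around 0.75 * x0 (my own calculation)

Such estimates only give point values for specific policies; the potential-function machinery on the next slides yields certified bounds over all policies.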


  19. LUPFs and LLPFs. We focus on supval; the approach for infval is similar.
      Linear Upper Potential Function (LUPF): Let X be the set of program variables. A function h : R^X → R is an LUPF if it satisfies the following conditions:
      1. h is linear,
      2. the value of h at terminating valuations is bounded between two fixed constants K and K′,
      3. for every Qi and every valuation v that satisfies the loop guard:
             h(v) ≥ E_u(h(Qi(v, u))) + E_u(R(u, Qi)),
      4. there is a fixed constant M such that, at each step of the program, the value of h changes by at most M.
      Linear Lower Potential Function (LLPF): An LLPF h is a function that satisfies the above conditions, except that condition 3 is changed to: for every v that satisfies the loop guard, there exists a Qi such that
             h(v) ≤ E_u(h(Qi(v, u))) + E_u(R(u, Qi)).
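Returning to condition 3: for a concrete candidate it can be checked numerically. The sketch below (not from the slides) tests the LUPF inequality for the Gambler's Ruin program with the candidate h(x) = 2x, whose slope is the one the LP will produce later; the candidate and the sampling grid are my own choices.

    def h(x):
        # candidate linear potential function h(x) = 2x
        return 2.0 * x

    def expected_next(x, p):
        """E_u[h(Q_i(x, u))] + E_u[R(u, Q_i)] for the rule that, with probability p,
        sets x := x + 1 and gives reward 1, and otherwise sets x := x - 1."""
        return p * (h(x + 1) + 1.0) + (1.0 - p) * h(x - 1)

    # Check the LUPF inequality h(x) >= E[h(next)] + E[reward] for both rules
    # on a grid of guard-satisfying valuations (x >= 1).
    for x in [1.0 + 0.5 * k for k in range(200)]:
        for p in (0.4, 0.3):
            assert h(x) >= expected_next(x, p) - 1e-9, (x, p)
    print("condition 3 holds for h(x) = 2x on all sampled points")

Equality holds for the 0.4-rule and strict inequality for the 0.3-rule, which is consistent with the tightness result on the last slide.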


  21. Theorem. If h is an LUPF, then supval(v) ≤ h(v) − K for all valuations v ∈ R^X that satisfy the loop guard.
      Theorem. If h is an LLPF, then supval(v) ≥ h(v) − K′ for all valuations v ∈ R^X that satisfy the loop guard.
      Sketch of Proof. Construct a stochastic process based on h. Show that it forms a supermartingale, and then apply the optional stopping theorem (OST).
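The one-line proof sketch can be unpacked a little; the reconstruction below is mine, not taken from the slides. Write v_n for the valuation after n steps, r_n for the reward collected at step n, and T for the termination time. Condition 3 makes the process X_n a supermartingale under any policy, and conditions 2 and 4, together with finite expected termination time, justify applying the optional stopping theorem:

    \begin{align*}
    X_n &:= h(v_n) + \textstyle\sum_{i<n} r_i,\\
    \mathbb{E}[X_{n+1} \mid \mathcal{F}_n]
        &= \mathbb{E}[h(v_{n+1}) + r_n \mid \mathcal{F}_n] + \textstyle\sum_{i<n} r_i
         \;\le\; h(v_n) + \textstyle\sum_{i<n} r_i \;=\; X_n,\\
    \mathbb{E}[X_T] \le X_0 = h(v_0)
        &\;\Longrightarrow\;
        \mathbb{E}\Big[\textstyle\sum_{i<T} r_i\Big] \;\le\; h(v_0) - \mathbb{E}[h(v_T)] \;\le\; h(v_0) - K.
    \end{align*}

Taking the supremum over all policies with finite expected termination time gives supval(v_0) ≤ h(v_0) − K; the LLPF bound follows symmetrically with a submartingale.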


  23. Synthesizing LUPFs
      while x ≥ 1 do
          if (0.4) { x := x + 1; reward := 1 } else { x := x − 1 }
        □
          if (0.3) { x := x + 1; reward := 1 } else { x := x − 1 }
      od
      Let h : R → R be an LUPF for this example. Then:
      (1) ∃ λ1, λ2 ∈ R  ∀ x ∈ R:  h(x) = λ1·x + λ2
      (2) ∃ K, K′ ∈ R  ∀ x ∈ [1, 2):  K ≤ h(x) ≤ K′
      (3) ∀ x ∈ [1, ∞):  h(x) ≥ 0.4 · (1 + h(x + 1)) + 0.6 · h(x − 1)  and
                          h(x) ≥ 0.3 · (1 + h(x + 1)) + 0.7 · h(x − 1)
      (4) ∃ M ∈ [0, ∞)  ∀ x ∈ [1, ∞):  |h(x) − h(x − 1)| ≤ M  and  |h(x) − h(x + 1)| ≤ M
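These quantified constraints can be turned into a small LP. The sketch below is illustrative (it is not the authors' tool): for this one-variable instance the quantifiers can be eliminated by hand rather than via the general Farkas'-lemma machinery, because the λ1·x + λ2 terms cancel in (3), so the two inequalities reduce to λ1 ≥ 2 and λ1 ≥ 0.75, and since λ1 > 0 the bounds in (2) and (4) only need to hold at the interval endpoints. The nonnegativity bounds, the tie-breaking objective terms, and the availability of SciPy are my own assumptions.

    from scipy.optimize import linprog  # assumes SciPy is installed

    # Decision variables: [lam1, lam2, K, Kp, M].
    # Hand-derived linear constraints for this instance:
    #   (3a)  lam1 >= 2          (from the 0.4-rule inequality, x-terms cancel)
    #   (3b)  lam1 >= 0.75       (from the 0.3-rule inequality)
    #   (2)   K <= lam1 + lam2   and   2*lam1 + lam2 <= Kp   (endpoints of [1, 2))
    #   (4)   lam1 <= M
    A_ub = [
        [-1,  0, 0,  0,  0],   # -lam1 <= -2
        [-1,  0, 0,  0,  0],   # -lam1 <= -0.75
        [-1, -1, 1,  0,  0],   # K - lam1 - lam2 <= 0
        [ 2,  1, 0, -1,  0],   # 2*lam1 + lam2 - Kp <= 0
        [ 1,  0, 0,  0, -1],   # lam1 - M <= 0
    ]
    b_ub = [-2, -0.75, 0, 0, 0]

    # Objective: minimize lam1 plus the remaining parameters; including them
    # makes the optimum unique here (the slides minimize lam1 only).
    c = [1, 1, 1, 1, 1]

    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * 5, method="highs")
    print(dict(zip(["lam1", "lam2", "K", "Kp", "M"], res.x)))
    # Expected output: lam1 = 2, lam2 = 0, K = 0, Kp = 4, M = 2

This reproduces the values reported on the next slide.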


  25. By applying Farkas' lemma and solving the resulting LP with the goal of minimizing λ1, we get: λ1 = M = 2, λ2 = K = 0, K′ = 4. Therefore, we have supval(x0) ≤ 2·x0 for all initial valuations x0 that satisfy the loop guard. Hence, in this case the problem was solved by a reduction to linear programming.


  28. Synthesizing LLPFs
      while x ≥ 1 do
          if (0.4) { x := x + 1; reward := 1 } else { x := x − 1 }
        □
          if (0.3) { x := x + 1; reward := 1 } else { x := x − 1 }
      od
      This case is a bit more complicated. If h is an LLPF, we must have exactly the same conditions as before, except that condition 3 changes to:
      (3′) ∀ x ∈ [1, ∞):  h(x) ≤ 0.4 · (1 + h(x + 1)) + 0.6 · h(x − 1)  or
                           h(x) ≤ 0.3 · (1 + h(x + 1)) + 0.7 · h(x − 1)
      which is equivalent to λ1 ≤ 2 (see the short derivation below), and hence we get supval(x0) ≥ 2·x0. So our previous bound is tight.
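The equivalence with λ1 ≤ 2 is not spelled out on the slide; one way to see it (my own calculation) is to substitute h(x) = λ1·x + λ2 into (3′), after which the λ1·x + λ2 terms cancel on both sides of each inequality:

    \begin{align*}
    0.4\,(1 + h(x+1)) + 0.6\,h(x-1) &= \lambda_1 x + \lambda_2 + 0.4 - 0.2\,\lambda_1,
      && \text{so the first inequality holds} \iff \lambda_1 \le 2,\\
    0.3\,(1 + h(x+1)) + 0.7\,h(x-1) &= \lambda_1 x + \lambda_2 + 0.3 - 0.4\,\lambda_1,
      && \text{so the second inequality holds} \iff \lambda_1 \le 0.75.
    \end{align*}

Since the disjunction of λ1 ≤ 2 and λ1 ≤ 0.75 is simply λ1 ≤ 2, the largest admissible LLPF slope equals the smallest admissible LUPF slope, which is why the bound 2·x0 is tight.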
