Path Finding under Uncertainty through Probabilistic Inference




1. Path Finding under Uncertainty through Probabilistic Inference
   David Tolpin, Jan Willem van de Meent, Brooks Paige, Frank Wood
   University of Oxford, June 8th, 2015
   Paper: http://arxiv.org/abs/1502.07314
   Slides: http://offtopia.net/ctp-pp-slides.pdf

2. Outline
   ◮ Probabilistic Programming
   ◮ Inference
   ◮ Path Finding and Probabilistic Inference
   ◮ Stochastic Policy Learning
   ◮ Case Study: Canadian Traveller Problem
   ◮ Summary

3. Intuition
   A probabilistic program is:
   ◮ a program with random computations;
   ◮ distributions are conditioned by 'observations';
   ◮ values of certain expressions are 'predicted': these form the output.
   It can be written in any language (extended by sample and observe).

4. Example: Model Selection

   (let [;; Model
         dist (sample (categorical [[normal 1/4]
                                    [gamma 1/4]
                                    [uniform-discrete 1/4]
                                    [uniform-continuous 1/4]]))
         a (sample (gamma 1 1))
         b (sample (gamma 1 1))
         d (dist a b)]

     ;; Observations
     (observe d 1)
     (observe d 2)
     (observe d 4)
     (observe d 7)

     ;; Explanation
     (predict :d (type d))
     (predict :a a)
     (predict :b b))

5. Definition
   A probabilistic program is a stateful deterministic computation P:
   ◮ Initially, P expects no arguments.
   ◮ On every call, P returns
     ◮ a distribution F,
     ◮ a distribution and a value (G, y),
     ◮ a value z,
     ◮ or ⊥.
   ◮ Upon returning F, P expects x ∼ F.
   ◮ Upon returning ⊥, P terminates.
   A program is run by calling P repeatedly until termination. The probability of each trace is

     p_P(x) ∝ ∏_{i=1}^{|x|} p_{F_i}(x_i) · ∏_{j=1}^{|y|} p_{G_j}(y_j)
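To make the trace probability concrete, here is a minimal sketch in plain Clojure (no Anglican runtime; the toy model with two Bernoulli draws and one observation is our own invention, not from the talk): the probability of a trace is just the product of the densities of its sampled values and observed values.

   ;; Toy model (ours, for illustration): x1 ~ Bernoulli(0.6),
   ;; x2 ~ Bernoulli(0.8 if x1 else 0.3), and we observe y = true
   ;; from Bernoulli(0.9 if x2 else 0.2).
   (defn log-bernoulli [p x]
     (Math/log (if x p (- 1.0 p))))

   (defn trace-log-prob [x1 x2]
     (+ (log-bernoulli 0.6 x1)                 ; sample point F1
        (log-bernoulli (if x1 0.8 0.3) x2)     ; sample point F2
        (log-bernoulli (if x2 0.9 0.2) true))) ; observe point (G1, y1)

   (Math/exp (trace-log-prob true true))       ; => 0.432, i.e. 0.6 * 0.8 * 0.9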

6. Outline (section divider; next: Inference)

7. Inference Objective
   ◮ Continuously and infinitely generate a sequence of samples drawn from the distribution of the output expression, so that someone else can put them to good use (vague but common).

8. Inference Objective (cont.)
   ◮ Approximately compute integrals of the form

     Φ = ∫_{-∞}^{∞} φ(x) p(x) dx

9. Inference Objective (cont.)
   ◮ Suggest the most probable explanation (MPE): the most likely assignment to all non-evidence variables given the evidence.
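The second objective has a one-line Monte Carlo reading; a minimal sketch in plain Clojure (the function names and the sampler are placeholders of ours): draw x ~ p repeatedly and average φ(x).

   (defn estimate-phi
     "Monte Carlo estimate of Phi = integral of phi(x) p(x) dx,
      given a thunk sample-p that draws x ~ p."
     [phi sample-p n]
     (/ (reduce + (repeatedly n #(phi (sample-p)))) n))

   ;; Example: E[x^2] under p = uniform(0,1); the exact value is 1/3.
   (estimate-phi #(* % %) rand 100000)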

10. Example: Inference Results

   [(let [dfreqs (frequencies (map :d predicts))]
      (plot/bar-chart
        (map (comp #(str/replace % #"class embang.runtime.(.*)-distribution" "$1")
                   str first)
             dfreqs)
        (map second dfreqs)
        :plot-size 600 :aspect-ratio 4 :y-title "sample count"))
    (plot/histogram (map :a predicts)
                    :x-title "a" :bins 30 :plot-size 250 :aspect-ratio 1.5
                    :y-title "sample count")
    (plot/histogram (map :b predicts)
                    :x-title "b" :bins 30 :plot-size 250 :aspect-ratio 1.5)]

   [Figure: a bar chart of sample counts per distribution type (gamma, normal, uniform-discrete, uniform-continuous) and histograms of the predicted a and b values.]

11. Outline (section divider; next: Path Finding and Probabilistic Inference)

12. Connection between MAP and Shortest Path
   Maximizing the (logarithm of) trace probability

     log p_P(x) = ∑_{i=1}^{|x|} log p_{F_i}(x_i) + ∑_{j=1}^{|y|} log p_{G_j}(y_j) + C

   corresponds to finding the shortest path in a graph G = (V, E):
   ◮ V = {(F_i, x_i)} ∪ {(G_j, y_j)}.
   ◮ Edge costs are -log p_{F_i}(x_i) or -log p_{G_j}(y_j).
   [Diagram: nodes (F_1, x_1), (F_2, x_2), (G_1, y_1), with edges labelled -log p_{F_1}(x_1), -log p_{F_2}(x_2), -log p_{G_1}(y_1).]
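As a minimal illustration (plain Clojure; the tiny two-choice model is ours and reuses the earlier toy, not an example from the paper): give each random choice an edge cost of -log p, and the MAP trace is the assignment with the minimum total path cost.

   (defn log-bernoulli [p x]
     (Math/log (if x p (- 1.0 p))))

   (defn path-cost [x1 x2]
     (- (+ (log-bernoulli 0.6 x1)                   ; edge for (F1, x1)
           (log-bernoulli (if x1 0.8 0.3) x2)       ; edge for (F2, x2)
           (log-bernoulli (if x2 0.9 0.2) true))))  ; edge for (G1, y1 = true)

   ;; Enumerate the four traces; the cheapest path is the MAP trace.
   (apply min-key #(apply path-cost %)
          (for [x1 [true false], x2 [true false]] [x1 x2]))
   ;; => [true true]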

13. Marginal MAP as Policy Learning
   In Marginal MAP, the assignment of a part of the trace, x_θ, is inferred. In a probabilistic program:
   ◮ x_θ becomes the program output z.
   ◮ z is marginalized over x \ x_θ.
   ◮ x_θ^MAP = argmax_{x_θ} p_P(z).
   Determining x_θ^MAP corresponds to learning a policy x_θ which minimizes the expected path length

     E_{x \ x_θ} [ -∑_{i=1}^{|x_θ|} log p_{F_i^θ}(x_i^θ) - ∑_{j=1}^{|y|} log p_{G_j}(y_j) ]

14. Outline (section divider; next: Stochastic Policy Learning)

15. Policy Learning through Probabilistic Inference

   Require: agent, Instances, Policies
   1: instance ← Draw(Instances)
   2: policy ← Draw(Policies)
   3: cost ← Run(agent, instance, policy)
   4: Observe(1, Bernoulli(e^{-cost}))
   5: Print(policy)

   The log probability of the output policy is

     log p_P(policy) = -cost(policy) + log p_Policies(policy) + C

   When policies are drawn uniformly,

     log p_P(policy) = -cost(policy) + C'
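Why the Bernoulli(e^{-cost}) observation works: it multiplies each policy's prior weight by e^{-cost}, so cheaper policies get exponentially more posterior mass. A minimal sketch in plain Clojure (the two-policy toy with fixed costs is ours; in the algorithm above the cost is stochastic, drawn per instance), using self-normalized importance sampling:

   (defn posterior
     "Posterior mass of each policy under a uniform prior after
      observing true from Bernoulli(e^-cost)."
     [policies cost-fn n]
     (let [samples (repeatedly n #(rand-nth policies))
           weights (map #(Math/exp (- (cost-fn %))) samples)
           total   (reduce + weights)]
       (reduce (fn [m [p w]] (update m p (fnil + 0.0) (/ w total)))
               {}
               (map vector samples weights))))

   (posterior [:short :long] {:short 1.0 :long 2.0} 100000)
   ;; => {:short ~0.73, :long ~0.27}, matching the ratio e^-1 : e^-2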

16. Outline (section divider; next: Case Study: Canadian Traveller Problem)

17. Canadian Traveller Problem
   The CTP is the problem of finding the shortest travel distance in a graph where some edges may be blocked. Given
   ◮ an undirected weighted graph G = (V, E),
   ◮ the initial and final location nodes s and t,
   ◮ edge weights w: E → R,
   ◮ traversability probabilities p_o: E → (0, 1],
   find the shortest travel distance from s to t, that is, the sum of the weights of all traversed edges.

18. The Simplest CTP Instance: Two Roads
   Given
   ◮ two roads that are open with probabilities p1 and p2,
   ◮ road costs c1 and c2,
   ◮ a cost cb of bumping into a blocked road,
   learn the optimum policy q.

   (defquery tworoads
     (loop []
       (let [o1 (sample (flip p1))
             o2 (sample (flip p2))]
         (if (not (or o1 o2))
           (recur)
           (let [q (sample (uniform-continuous 0. 1.))
                 s (sample (flip (- 1 q)))
                 distance (if s
                            (if o1 c1 (+ c2 cb))
                            (if o2 c2 (+ c1 cb)))]
             (observe +factor+ (- distance))
             (predict :q q))))))
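To see what the query is scoring, here is a plain-Clojure sketch (ours; p1, p2, c1, c2, cb become explicit parameters) that estimates the expected travel cost of a fixed policy q by simulating the two-road world, rejecting worlds where both roads are blocked just as the query's (recur) does:

   (defn expected-cost [q p1 p2 c1 c2 cb n]
     (/ (reduce +
          (repeatedly n
            (fn []
              (loop []
                (let [o1 (< (rand) p1)
                      o2 (< (rand) p2)]
                  (if-not (or o1 o2)
                    (recur)                      ; both blocked: resample world
                    (if (< (rand) (- 1 q))       ; s true: try road 1 first
                      (if o1 c1 (+ c2 cb))
                      (if o2 c2 (+ c1 cb)))))))))
        n))

   ;; e.g. sweep q over [0, 1] to see which road the optimal policy favours:
   ;; (expected-cost 0.0 0.5 0.9 3.0 1.0 0.5 100000)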

19. Learning Stochastic Policy for CTP
   Depth-first search based policy:
   ◮ the agent traverses G in depth-first order;
   ◮ the policy specifies the probabilities of selecting each adjacent edge in every node.

   Require: CTP(G, s, t, w, p)
   1: for v ∈ V do
   2:   policy(v) ← Draw(Dirichlet(1/deg(v), ..., 1/deg(v)))
   3: end for
   4: repeat
   5:   instance ← Draw(CTP(G, w, p))
   6:   (reached, distance) ← StDFS(instance, policy)
   7: until reached
   8: Observe(1, Bernoulli(e^{-distance}))
   9: Print(policy)
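The policy at each node is just a categorical distribution over its incident edges; here is a minimal sketch in plain Clojure (function and keyword names are ours) of the edge-selection step that a traversal like StDFS would perform:

   (defn pick
     "Draw a key of map m with probability proportional to its value."
     [m]
     (let [r (* (rand) (reduce + (vals m)))]
       (reduce (fn [acc [k w]]
                 (let [acc (+ acc w)]
                   (if (< r acc) (reduced k) acc)))
               0.0
               m)))

   ;; e.g. at a node whose learned policy favours neighbour :b:
   (pick {:b 0.7, :c 0.2, :d 0.1})   ; => :b about 70% of the time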

20. Inference Results: CTP Travel Graphs
   [Figure: learned policies on CTP travel graphs with open fractions 1.0, 0.9, 0.8, 0.7, and 0.6; line widths indicate the frequency of travelling each edge.]

21. Outline (section divider; next: Summary)

22. Summary
   ◮ Discovery of a bilateral correspondence between probabilistic inference and policy learning for path finding.
   ◮ A new approach to policy learning based on the established correspondence.
   ◮ A realization of the approach for the Canadian Traveller Problem, where improved policies were consistently learned by probabilistic program inference.

  23. Thank You
