PILCO: A Model-Based and Data-Efficient Approach to Policy Search


  1. PILCO: A Model-Based and Data-Efficient Approach to Policy Search (M.P. Deisenroth and C.E. Rasmussen) CSC2541 November 4, 2016

  2. PILCO Graphical Model. PILCO – Probabilistic Inference for Learning COntrol. Latent states $\{X_t\}$ evolve through time based on previous states and controls. The policy $\pi$ maps $Z_t$, a noisy observation of $X_t$, to a control $U_t$.

  3. PILCO Objective. Transitions follow the dynamical system $x_t = f(x_{t-1}, u_{t-1})$, where $x \in \mathbb{R}^D$, $u \in \mathbb{R}^F$, and $f$ is a latent function. Let $\pi$ be parameterized by $\theta$ and $u_t = \pi(x_t, \theta)$. The objective is to find the $\pi$ that minimizes the expected cost of following $\pi$ for $T$ steps, $J^\pi(\theta) = \sum_{t=1}^{T} \mathbb{E}_{x_t}[c(x_t)]$. The cost function encodes information about a target state, e.g., $c(x) = 1 - \exp(-\|x - x_{\text{target}}\|^2 / \sigma_c^2)$.
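As a concrete illustration of the saturating cost above, here is a minimal Python sketch; the names `saturating_cost`, `x_target`, and `sigma_c` are illustrative, not from the paper.

```python
import numpy as np

def saturating_cost(x, x_target, sigma_c):
    """Saturating cost c(x) = 1 - exp(-||x - x_target||^2 / sigma_c^2).

    Returns values in [0, 1): close to 0 near the target, close to 1 far away.
    """
    sq_dist = np.sum((x - x_target) ** 2)
    return 1.0 - np.exp(-sq_dist / sigma_c ** 2)

# Example: a 2-D state far from / close to the target
x_target = np.zeros(2)
print(saturating_cost(np.array([3.0, 3.0]), x_target, sigma_c=0.5))  # ~1
print(saturating_cost(np.array([0.1, 0.0]), x_target, sigma_c=0.5))  # ~0
```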

  4. Algorithm

  5. Algorithm
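The algorithm slides show PILCO's high-level loop. Below is a hedged Python sketch of that loop under the assumption that the model-learning, policy-search, and data-collection steps are supplied as callables; the helper names (`rollout`, `train_gp_dynamics`, `optimize_policy`) are placeholders, not the authors' code.

```python
# Minimal sketch of the PILCO loop: collect data, learn a GP dynamics model,
# evaluate and improve the policy on the model, apply it, and repeat.

def pilco_loop(env, policy, rollout, train_gp_dynamics, optimize_policy,
               n_iterations=10, horizon_T=50):
    # 1. Apply random controls once to collect an initial batch of data.
    data = rollout(env, policy=None, horizon=horizon_T)

    for _ in range(n_iterations):
        # 2. Learn a probabilistic (GP) dynamics model from all data so far.
        dynamics_model = train_gp_dynamics(data)

        # 3.-4. Policy evaluation and improvement: predict p(x_1), ..., p(x_T)
        #    by moment matching, sum the expected costs to get J^pi(theta),
        #    and minimize it w.r.t. theta (e.g., with CG or L-BFGS).
        policy = optimize_policy(policy, dynamics_model, horizon_T)

        # 5. Apply the improved policy to the real system and record the data.
        data = data + rollout(env, policy, horizon=horizon_T)

    return policy
```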

  6. Dynamics Model Learning. Multiple plausible function approximators of $f$.

  8. Dynamics Model Learning. Define a Gaussian process (GP) prior on the latent dynamics function $f$.

  9. Dynamics Model Learning. Let the prior on $f$ be $\mathcal{GP}(0, k(\tilde{x}, \tilde{x}'))$, where $\tilde{x} \triangleq [x^\top\ u^\top]^\top$ and the squared exponential kernel is given by $k(\tilde{x}, \tilde{x}') = \alpha^2 \exp\!\big(-\tfrac{1}{2}(\tilde{x} - \tilde{x}')^\top \Lambda^{-1} (\tilde{x} - \tilde{x}')\big)$, with signal variance $\alpha^2$ and diagonal matrix $\Lambda$ of squared length-scales.
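A minimal Python sketch of this squared exponential (ARD) kernel, assuming the parameterization above; the variable names are illustrative.

```python
import numpy as np

def se_kernel(x1, x2, signal_var, length_scales):
    """Squared exponential kernel with ARD length-scales.

    k(x1, x2) = signal_var * exp(-0.5 * (x1 - x2)^T Lambda^{-1} (x1 - x2)),
    with Lambda = diag(length_scales**2).
    """
    d = (x1 - x2) / length_scales
    return signal_var * np.exp(-0.5 * np.dot(d, d))

# Example: kernel value between two state-action vectors x_tilde = [x^T u^T]^T
x_tilde_1 = np.array([0.1, -0.3, 0.5])   # [x1, x2, u]
x_tilde_2 = np.array([0.0, -0.2, 0.4])
print(se_kernel(x_tilde_1, x_tilde_2, signal_var=1.0,
                length_scales=np.array([1.0, 1.0, 2.0])))
```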

  10. Dynamics Model Learning. Let $\Delta_t = x_t - x_{t-1} + \varepsilon$, where $\varepsilon \sim \mathcal{N}(0, \Sigma_\varepsilon)$ and $\Sigma_\varepsilon = \mathrm{diag}([\sigma_{\varepsilon_1}, \ldots, \sigma_{\varepsilon_D}])$. The GP yields one-step predictions (see Section 2.2 in reference 3). Given $n$ training inputs $\tilde{X} = [\tilde{x}_1, \ldots, \tilde{x}_n]$ and corresponding training targets $y = [\Delta_1, \ldots, \Delta_n]$, the posterior GP hyper-parameters are learned by evidence maximization (type-2 maximum likelihood).
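Below is a hedged sketch of this training step using scikit-learn rather than the authors' implementation: one GP per target dimension is fit on state differences, and `fit()` maximizes the log marginal likelihood, i.e., evidence maximization. The toy data and all names are illustrative.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import ConstantKernel, RBF, WhiteKernel

# Toy data: states x_t (D=2), controls u_t (F=1); targets are differences.
rng = np.random.default_rng(0)
X = rng.normal(size=(50, 2))                               # states x_{t-1}
U = rng.normal(size=(50, 1))                               # controls u_{t-1}
X_next = X + 0.1 * U + 0.01 * rng.normal(size=X.shape)     # toy dynamics

X_tilde = np.hstack([X, U])          # training inputs [x^T u^T]^T
deltas = X_next - X                  # training targets Delta_t

# One GP per target dimension, SE-ARD kernel plus noise; hyper-parameters are
# found by maximizing the marginal likelihood inside fit().
models = []
for d in range(deltas.shape[1]):
    kernel = ConstantKernel(1.0) * RBF(length_scale=np.ones(X_tilde.shape[1])) \
             + WhiteKernel(noise_level=1e-2)
    gp = GaussianProcessRegressor(kernel=kernel, n_restarts_optimizer=3)
    gp.fit(X_tilde, deltas[:, d])
    models.append(gp)

# One-step prediction at a new state-action pair: mean and std of Delta.
x_tilde_star = np.array([[0.2, -0.1, 0.5]])
for d, gp in enumerate(models):
    mean, std = gp.predict(x_tilde_star, return_std=True)
    print(f"dim {d}: E[Delta] = {mean[0]:.3f}, std = {std[0]:.3f}")
```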

  11. Algorithm

  12. Policy Evaluation. Evaluating the objective $J^\pi(\theta) = \sum_{t=1}^{T} \mathbb{E}_{x_t}[c(x_t)]$ requires the state distributions $p(x_t)$. We have $x_t = x_{t-1} + \Delta_t - \varepsilon$, where, in general, computing $p(\Delta_t)$ is analytically intractable. Instead, $p(\Delta_t)$ is approximated with a Gaussian via moment matching.

  13. Moment Matching. The input distribution $p(x_{t-1}, u_{t-1})$ is assumed Gaussian. When it is propagated through the GP model, we obtain $p(\Delta_t)$, which is approximated by a Gaussian via moment matching.

  14. Moment Matching. $p(x_t)$ can now be approximated by $\mathcal{N}(\mu_t, \Sigma_t)$ with $\mu_t = \mu_{t-1} + \mu_\Delta$, where $\mu_\Delta$ and $\Sigma_\Delta$ are computed exactly via iterated expectation and variance.
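PILCO computes these moments analytically. The sketch below only illustrates the idea of moment matching: fit a Gaussian to the first two moments of a nonlinear push-forward, estimated here with Monte Carlo samples for clarity. The function `f` and all names are illustrative.

```python
import numpy as np

# Moment matching illustration: approximate p(y) for y = f(x), x ~ N(mu, sigma^2),
# by a Gaussian with the same mean and variance. PILCO does this exactly for a
# GP with SE kernel; here the moments are estimated from samples.

def f(x):
    # A stand-in nonlinear "dynamics" function (illustrative only).
    return np.sin(x) + 0.1 * x ** 2

rng = np.random.default_rng(0)
mu, sigma = 0.5, 0.3                          # input distribution N(mu, sigma^2)
x_samples = rng.normal(mu, sigma, size=100_000)
y_samples = f(x_samples)                      # the true p(y) is non-Gaussian

mu_y = y_samples.mean()                       # first moment
var_y = y_samples.var()                       # second central moment
print(f"moment-matched Gaussian: N({mu_y:.3f}, {var_y:.3f})")
```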

  15. Algorithm

  16. Analytic Gradient for Policy Improvement. Let $E_t = \mathbb{E}_{x_t}[c(x_t)]$ so that $J^\pi(\theta) = \sum_{t=1}^{T} E_t$. $E_t$ depends on $\theta$ through $p(x_t) = \mathcal{N}(\mu_t, \Sigma_t)$, which depends on $\theta$ through $p(x_{t-1})$, ..., which depends on $\theta$ through $\mu_u$ and $\Sigma_u$, where $u_t = \pi(x_t, \theta)$. The chain rule is used to calculate the derivatives. Analytic gradients allow for gradient-based non-convex optimization methods, e.g., CG or L-BFGS.
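As an illustration of this policy improvement step, here is a hedged sketch using `scipy.optimize.minimize` with L-BFGS-B. Unlike PILCO, it relies on numerical gradients, and `predicted_cost` is only a stand-in for the moment-matched evaluation of $J^\pi(\theta)$; all names are illustrative. PILCO would instead pass the analytic gradient $\partial J / \partial \theta$ (the `jac` argument), which is what makes the search efficient for many policy parameters.

```python
import numpy as np
from scipy.optimize import minimize

def predicted_cost(theta):
    # Placeholder for J^pi(theta): in PILCO this is the sum over t of E[c(x_t)]
    # computed by propagating Gaussians through the GP dynamics model.
    # Here it is a smooth toy function so the example runs end to end.
    return np.sum((theta - 1.0) ** 2) + 0.1 * np.sum(np.sin(theta) ** 2)

theta0 = np.zeros(5)                         # initial policy parameters
result = minimize(predicted_cost, theta0, method="L-BFGS-B")
print("optimized theta:", result.x)
print("predicted cost :", result.fun)
```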

  17. Data-Efficiency

  18. Advantages and Disadvantages. Advantages: data-efficient; incorporates model uncertainty into long-term planning; does not rely on expert knowledge (e.g., demonstrations) or task-specific prior knowledge. Disadvantages: not an optimal-control method; if the $p(x_t)$ do not cover the target region and $\sigma_c$ induces a cost that is very peaked around the target, PILCO gets stuck in a local optimum because of zero gradients; the learned dynamics model is only confident in areas of the state space previously observed; it does not take temporal correlation into account, since model uncertainty is treated as uncorrelated noise.

  19. Extension: PILCO with Bayesian Filtering. R. McAllister and C.E. Rasmussen, “Data-Efficient Reinforcement Learning in Continuous-State POMDPs,” https://arxiv.org/abs/1602.02523

  20. References
  1. M.P. Deisenroth and C.E. Rasmussen, “PILCO: A Model-Based and Data-Efficient Approach to Policy Search,” in Proceedings of the 28th International Conference on Machine Learning, Bellevue, WA, USA, 2011.
  2. R. McAllister and C.E. Rasmussen, “Data-Efficient Reinforcement Learning in Continuous-State POMDPs,” https://arxiv.org/abs/1602.02523.
  3. C.E. Rasmussen and C.K.I. Williams, Gaussian Processes for Machine Learning, MIT Press, 2006. www.gaussianprocess.org/gpml/chapters
  4. C.M. Bishop, Pattern Recognition and Machine Learning, Section 6.4, Springer, 2006. ISBN 0-387-31073-8.
