Proximal Policy Optimization, Ruifan Yu (ruifan.yu@uwaterloo.ca), CS 885 - PowerPoint PPT Presentation

  1. Proximal Policy Optimization. Ruifan Yu (ruifan.yu@uwaterloo.ca), CS 885, June 20

  2. Proximal Policy Optimization (OpenAI). "PPO has become the default reinforcement learning algorithm at OpenAI because of its ease of use and good performance." Schulman, J., Wolski, F., Dhariwal, P., Radford, A., & Klimov, O. (2017). Proximal policy optimization algorithms. arXiv preprint arXiv:1707.06347. https://arxiv.org/pdf/1707.06347 https://blog.openai.com/openai-baselines-ppo/

  3. Policy Gradient (REINFORCE). In practice, we update on each batch (trajectory). * Using the same notation as in the paper.
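
  The REINFORCE estimator ascends $\hat{E}_t[\nabla_\theta \log \pi_\theta(a_t|s_t)\, \hat{A}_t]$. As a minimal sketch (not from the slides) of the per-batch update in PyTorch, where `log_probs` and `advantages` are illustrative placeholder tensors:

    import torch

    def reinforce_loss(log_probs: torch.Tensor, advantages: torch.Tensor) -> torch.Tensor:
        # Ascend E[log pi_theta(a_t|s_t) * A_t] by descending its negation.
        # Advantages are treated as fixed targets, so they carry no gradient.
        return -(log_probs * advantages.detach()).mean()

    # Usage (one gradient step per batch/trajectory):
    # loss = reinforce_loss(log_probs, advantages); loss.backward(); optimizer.step()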

  4. Problem?
  • Unstable update. Step size is very important:
    • If the step size is too large: a large step → a bad policy; the next batch is generated from the current bad policy → we collect bad samples; bad samples → an even worse policy. (Compare to supervised learning, where the correct labels and data in the following batches may correct it.)
    • If the step size is too small: the learning process is slow.
  • Data inefficiency. On-policy method: for each new policy, we need to generate a completely new trajectory. The data is thrown out after just one gradient update. As complex neural networks need many updates, this makes the training process very slow.

  5. Importance Sampling. Estimate an expectation under one distribution by sampling from another distribution:
     $E_{x \sim p}[f(x)] \approx \frac{1}{N} \sum_{i=1,\, x_i \in p}^{N} f(x_i)$
     $E_{x \sim p}[f(x)] = \int f(x)\, p(x)\, dx = \int f(x)\, \frac{p(x)}{q(x)}\, q(x)\, dx = E_{x \sim q}\!\left[ f(x)\, \frac{p(x)}{q(x)} \right] \approx \frac{1}{N} \sum_{i=1,\, x_i \in q}^{N} f(x_i)\, \frac{p(x_i)}{q(x_i)}$
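
  A small numerical check of the identity above (the choice of $p$, $q$, and $f$ is illustrative, not from the slides): sample from $q$, reweight by $p(x)/q(x)$, and recover an expectation under $p$.

    import numpy as np

    rng = np.random.default_rng(0)
    f = lambda x: x ** 2                                   # estimate E_{x~p}[x^2]

    def gauss_pdf(x, mu, sigma):
        return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

    # target p = N(0, 1), proposal q = N(0.5, 1.2)
    x = rng.normal(0.5, 1.2, size=100_000)                 # samples drawn from q
    w = gauss_pdf(x, 0.0, 1.0) / gauss_pdf(x, 0.5, 1.2)    # importance weights p(x)/q(x)
    print((f(x) * w).mean())                               # approx 1.0 = E_{x~N(0,1)}[x^2]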

  6. Data Inefficiency → Make it efficient: evaluate the gradient of the current policy using previous samples (like the replay buffer in DQN), i.e., avoid sampling from the current policy. Can we estimate an expectation of one distribution without taking samples from it?

  7. Importance Sampling in Policy Gradient. Applying $E_{x \sim p}[f(x)] = E_{x \sim q}\!\left[ f(x)\, \frac{p(x)}{q(x)} \right]$ to the policy gradient:
     $\nabla J(\theta) = E_{(s_t, a_t) \sim \pi_\theta}\!\left[ \nabla \log \pi_\theta(a_t | s_t)\, A(s_t, a_t) \right] = E_{(s_t, a_t) \sim \pi_{\theta_{old}}}\!\left[ \frac{\pi_\theta(s_t, a_t)}{\pi_{\theta_{old}}(s_t, a_t)}\, \nabla \log \pi_\theta(a_t | s_t)\, A(s_t, a_t) \right]$
     $J(\theta) = E_{(s_t, a_t) \sim \pi_{\theta_{old}}}\!\left[ \frac{\pi_\theta(s_t, a_t)}{\pi_{\theta_{old}}(s_t, a_t)}\, A(s_t, a_t) \right]$  (surrogate objective function)
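
  A minimal sketch of the surrogate objective in PyTorch, assuming the old policy's log-probabilities were stored when the batch was collected (tensor names are illustrative):

    import torch

    def surrogate_objective(new_log_probs: torch.Tensor,
                            old_log_probs: torch.Tensor,
                            advantages: torch.Tensor) -> torch.Tensor:
        # r_t(theta) = pi_theta(a_t|s_t) / pi_theta_old(a_t|s_t), via the log-prob difference
        ratio = torch.exp(new_log_probs - old_log_probs.detach())
        # Maximize E[r_t * A_t]; only the ratio carries gradients w.r.t. theta.
        return (ratio * advantages.detach()).mean()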

  8. Importance Sampling: Problem? No free lunch! The two expectations are the same, but we estimate them by sampling, so the variance also matters:
     $Var[X] = E[X^2] - (E[X])^2$
     $Var_{x \sim p}[f(x)] = E_{x \sim p}[f(x)^2] - (E_{x \sim p}[f(x)])^2$
     $Var_{x \sim q}\!\left[ f(x)\, \frac{p(x)}{q(x)} \right] = E_{x \sim q}\!\left[ \left( f(x)\, \frac{p(x)}{q(x)} \right)^{\!2} \right] - \left( E_{x \sim q}\!\left[ f(x)\, \frac{p(x)}{q(x)} \right] \right)^{\!2} = E_{x \sim p}\!\left[ f(x)^2\, \frac{p(x)}{q(x)} \right] - (E_{x \sim p}[f(x)])^2$
     Price (tradeoff): if $\frac{p(x)}{q(x)}$ is far from 1, we may need to sample more data.
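
  A toy illustration of this tradeoff (the distributions and function are illustrative, not from the slides): both estimators target the same expectation, but when $q$ is far from $p$ the importance-sampling estimate is far noisier for the same sample size.

    import numpy as np

    rng = np.random.default_rng(1)
    f = lambda x: x ** 2
    pdf = lambda x, mu, s: np.exp(-0.5 * ((x - mu) / s) ** 2) / (s * np.sqrt(2 * np.pi))

    def one_run(n=2_000):
        xp = rng.normal(0.0, 1.0, n)                  # direct samples from p = N(0, 1)
        xq = rng.normal(2.0, 1.0, n)                  # proposal q = N(2, 1), far from p
        w = pdf(xq, 0.0, 1.0) / pdf(xq, 2.0, 1.0)     # weights p(x)/q(x), far from 1
        return f(xp).mean(), (f(xq) * w).mean()

    runs = np.array([one_run() for _ in range(200)])
    # The importance-sampling column has a much larger spread across runs.
    print("direct MC   mean %.2f  std %.3f" % (runs[:, 0].mean(), runs[:, 0].std()))
    print("importance  mean %.2f  std %.3f" % (runs[:, 1].mean(), runs[:, 1].std()))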

  9. Unstable Update. Unstable → Stable: adaptive learning rate; make confident updates; limit the policy update range. Can we measure the distance between two distributions?

  10. KL Divergence. Measures the distance between two distributions:
     $D_{KL}(P \| Q) = \sum_x P(x) \log \frac{P(x)}{Q(x)}$
     KL divergence of two policies:
     $D_{KL}(\pi_1 \| \pi_2)[s] = \sum_{a \in A} \pi_1(a|s) \log \frac{\pi_1(a|s)}{\pi_2(a|s)}$
     * image: Kullback–Leibler divergence (Wikipedia) https://en.wikipedia.org/wiki/Kullback–Leibler_divergence
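
  A quick sketch of the discrete-policy KL above in NumPy (the two action distributions are illustrative placeholders):

    import numpy as np

    def kl_divergence(pi1: np.ndarray, pi2: np.ndarray) -> float:
        # D_KL(pi1 || pi2) at a single state, for a discrete action space
        return float(np.sum(pi1 * np.log(pi1 / pi2)))

    print(kl_divergence(np.array([0.7, 0.2, 0.1]),
                        np.array([0.5, 0.3, 0.2])))   # approx 0.085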

  11. Trust Region Policy Optimization (TRPO). Common trick in optimization: the Lagrangian dual. TRPO uses a hard constraint rather than a penalty because it is hard to choose a single value of β that performs well across different problems, or even within a single problem, where the characteristics change over the course of learning.
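
  For context (as stated in the PPO paper's background on TRPO), the constrained surrogate problem TRPO solves, and the penalized alternative it avoids, are:
     $\max_\theta \; \hat{E}_t\!\left[ \frac{\pi_\theta(a_t|s_t)}{\pi_{\theta_{old}}(a_t|s_t)}\, \hat{A}_t \right] \quad \text{subject to} \quad \hat{E}_t\!\left[ \mathrm{KL}\big[ \pi_{\theta_{old}}(\cdot|s_t),\, \pi_\theta(\cdot|s_t) \big] \right] \le \delta$
     $\max_\theta \; \hat{E}_t\!\left[ \frac{\pi_\theta(a_t|s_t)}{\pi_{\theta_{old}}(a_t|s_t)}\, \hat{A}_t \;-\; \beta\, \mathrm{KL}\big[ \pi_{\theta_{old}}(\cdot|s_t),\, \pi_\theta(\cdot|s_t) \big] \right]$  (penalty form with coefficient $\beta$)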

  12. Proximal Policy Optimization (PPO). TRPO uses conjugate gradient descent to handle the constraint; the Hessian matrix → expensive in both computation and space. Idea: the constraint helps the training process, but maybe it does not have to be a strict constraint. Does it matter if we only break the constraint a few times? What if we treat it as a "soft" constraint and add a proximal term to the objective function?

  13. PPO with Adaptive KL Penalty. Hard to pick a β value → use an adaptive β. Still need to set up a KL divergence target value ...
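
  The paper's adaptive rule adjusts β after each policy update by comparing the measured KL divergence against a target $d_{targ}$ (the factors 1.5 and 2 are the paper's heuristic choices). A minimal sketch:

    def update_beta(beta: float, kl: float, kl_target: float) -> float:
        # Adaptive KL penalty coefficient, as described in the PPO paper (Sec. 4)
        if kl > 1.5 * kl_target:
            beta *= 2.0          # policy moved too far: strengthen the penalty
        elif kl < kl_target / 1.5:
            beta /= 2.0          # policy barely moved: relax the penalty
        return beta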

  14. PPO with Adaptive KL Penalty * CS294 Fall 2017, Lecture 13 http://rail.eecs.berkeley.edu/deeprlcourse-fa17/f17docs/lecture_13_advanced_pg.pdf

  15. PPO with Clipped Objective. Fluctuation happens when r changes too quickly → limit r to a range? [Figure: plots of the clipped ratio, bounded between 1 - ε and 1 + ε]
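
  The clipped surrogate from the paper is $L^{CLIP}(\theta) = \hat{E}_t\!\left[ \min\big( r_t(\theta)\, \hat{A}_t,\ \mathrm{clip}(r_t(\theta),\, 1-\epsilon,\, 1+\epsilon)\, \hat{A}_t \big) \right]$. A minimal PyTorch sketch (tensor names are illustrative; $\epsilon = 0.2$ is the paper's default):

    import torch

    def clipped_surrogate(new_log_probs, old_log_probs, advantages, eps=0.2):
        # r_t(theta) = pi_theta(a_t|s_t) / pi_theta_old(a_t|s_t)
        ratio = torch.exp(new_log_probs - old_log_probs.detach())
        unclipped = ratio * advantages
        clipped = torch.clamp(ratio, 1.0 - eps, 1.0 + eps) * advantages
        # Taking the element-wise min makes the objective a pessimistic (lower) bound.
        return torch.min(unclipped, clipped).mean()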

  16. PPO with Clipped Objective * CS294 Fall 2017, Lecture 13 http://rail.eecs.berkeley.edu/deeprlcourse-fa17/f17docs/lecture_13_advanced_pg.pdf

  17. PPO in practice. The full objective combines: the surrogate objective function; a squared-error loss for the "critic"; and an entropy bonus to ensure sufficient exploration (encourage "diversity"). * c1, c2: empirical values; in the paper, c1 = 1, c2 = 0.01.
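
  Putting the pieces together, the paper's combined objective is $L_t^{CLIP+VF+S}(\theta) = \hat{E}_t\big[ L_t^{CLIP}(\theta) - c_1 L_t^{VF}(\theta) + c_2 S[\pi_\theta](s_t) \big]$. A minimal sketch of the corresponding training loss in PyTorch (tensor names are illustrative placeholders):

    import torch

    def ppo_loss(new_log_probs, old_log_probs, advantages,
                 values, returns, entropy, eps=0.2, c1=1.0, c2=0.01):
        ratio = torch.exp(new_log_probs - old_log_probs.detach())
        l_clip = torch.min(ratio * advantages,
                           torch.clamp(ratio, 1 - eps, 1 + eps) * advantages).mean()
        l_vf = (values - returns).pow(2).mean()        # squared-error "critic" loss
        # Minimize the negation of L^CLIP - c1 * L^VF + c2 * entropy bonus.
        return -(l_clip - c1 * l_vf + c2 * entropy.mean())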

  18. Performance. Results from the continuous control benchmark: average normalized scores (over 21 runs of the algorithm, on 7 environments).

  19. Performance. Results in MuJoCo environments, training for one million timesteps.

  20. Related Works. [1] Emergence of Locomotion Behaviours in Rich Environments: distributed PPO. Interesting fact: this paper was published before the PPO paper; DeepMind got the idea from OpenAI's talk at NIPS 2016. [2] An Adaptive Clipping Approach for Proximal Policy Optimization: PPO-λ, which changes the clipping range adaptively. [1] https://arxiv.org/abs/1707.02286 [2] https://arxiv.org/abs/1804.06461

  21. END Thank you
