SLIDE 1

Transfer of Samples in Policy Search via Multiple Importance Sampling

Andrea Tirinzoni, Mattia Salvini, and Marcello Restelli

36th International Conference on Machine Learning, Long Beach, California

SLIDES 2-4

Motivation

Policy Search (PS): very effective RL technique for continuous control tasks
[Heess et al., 2017] [OpenAI, 2018] [Vinyals et al., 2017]

High sample complexity remains a major limitation.

Samples available from several sources are discarded:
• Different policies
• Different environments

→ Transfer of Samples

SLIDES 5-6

Transfer of Samples

[Diagram: source tasks M1, M2, …, Mm provide trajectories τi,j ∼ πθj, Pj that are transferred to the target task M with policy πθ and transition model P]

Existing works consider batch value-based settings [Lazaric et al., 2008, Taylor et al., 2008, Lazaric and Restelli, 2011, Laroche and Barlier, 2017, Tirinzoni et al., 2018].

Extension to online PS algorithms is not trivial.

SLIDES 7-11

Transferring Samples in Policy Search

Goal: transfer source trajectories to improve the target gradient estimation.

Multiple Importance Sampling (MIS) Gradient Estimator:

∇θ^MIS J(θ) := (1/n) Σ_{j=1}^{m} Σ_{i=1}^{n_j} w(τi,j) gθ(τi,j)
(importance weight w times per-trajectory gradient gθ)

w(τ) := p(τ | θ, P) / Σ_{j=1}^{m} αj p(τ | θj, Pj)

Properties:
• Unbiased estimator with bounded weights
• Easily combined with other variance-reduction techniques
• Effective sample size ≡ transferable knowledge → adaptive batch size
• Provably robust to negative transfer
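As a concrete reading of the formula above, here is a minimal NumPy sketch (not the authors' implementation) of the balance-heuristic MIS gradient estimate, together with the standard effective-sample-size statistic that the adaptive batch size is based on. It assumes the trajectory log-densities under the target and the sources, the mixture proportions αj (e.g., αj = nj/n), and the per-trajectory gradient terms gθ(τ) are already available; all names are illustrative.

```python
import numpy as np
from scipy.special import logsumexp

def mis_gradient(log_p_target, log_p_sources, alphas, grads):
    """Balance-heuristic MIS estimate of the policy gradient (slides 7-11).

    log_p_target:  (n,) log p(tau_i | theta, P) under the target policy and dynamics
    log_p_sources: (n, m) log p(tau_i | theta_j, P_j) under each of the m sources
    alphas:        (m,) mixture proportions (e.g., alpha_j = n_j / n)
    grads:         (n, d) per-trajectory gradient terms g_theta(tau_i)
    """
    # log of the mixture density sum_j alpha_j p(tau | theta_j, P_j), in log space for stability
    log_mix = logsumexp(log_p_sources + np.log(alphas), axis=1)
    w = np.exp(log_p_target - log_mix)              # importance weights w(tau_i)
    grad_hat = (w[:, None] * grads).mean(axis=0)    # (1/n) * sum_i w_i * g_i
    ess = w.sum() ** 2 / (w ** 2).sum()             # standard effective sample size
    return grad_hat, ess

def fresh_trajectories_needed(target_batch_size, ess):
    # Adaptive batch size: collect only the target trajectories that the transferred
    # samples do not already "cover" (one simple reading of ESS = transferable knowledge;
    # the paper's exact rule may differ).
    return max(0, int(np.ceil(target_batch_size - ess)))
```

The ESS statistic lies between 1 and n: the less target-like the transferred trajectories are, the smaller it gets, and the more fresh target trajectories an adaptive rule of this kind would request.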

SLIDES 12-14

Estimating the Transition Models

Problem: P unknown → the importance weights cannot be computed.

Solution: online minimization of an upper bound on the expected MSE of ∇θ^MIS J(θ).

• Obtains principled estimates even without target samples
• Can be efficiently optimized for:
  • Discrete sets of models
  • Reproducing Kernel Hilbert Spaces (RKHS) → closed-form solution
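The slides do not spell out the bound itself. As a loose illustration of the "discrete set of models" case only, the sketch below scores each candidate target model by the second moment of the importance weights it would induce and keeps the minimizer; this surrogate, the function name, and the data layout are assumptions made for illustration, not the objective actually minimized in the paper.

```python
import numpy as np
from scipy.special import logsumexp

def select_target_model(candidate_log_p, log_p_sources, alphas):
    """Pick one model from a discrete set of candidate target dynamics.

    candidate_log_p: (k, n) log p(tau_i | theta, P_c) for each candidate model P_c
    log_p_sources:   (n, m) log p(tau_i | theta_j, P_j) under each source
    alphas:          (m,) mixture proportions

    Surrogate score: second moment of the importance weights each candidate would
    induce (an illustrative stand-in for the paper's expected-MSE upper bound).
    """
    log_mix = logsumexp(log_p_sources + np.log(alphas), axis=1)   # (n,)
    scores = [np.mean(np.exp(log_p - log_mix) ** 2) for log_p in candidate_log_p]
    return int(np.argmin(scores))
```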

SLIDE 15

Empirical Results

[Plots: expected return vs. episodes on Cartpole and Minigolf; curves compare No Transfer, Sample Reuse, Known Models, and Unknown Models]

• Good performance with both known and unknown models
• Very effective sample reuse from different policies but the same environment

SLIDE 16

Thank you!

andrea.tirinzoni@polimi.it
https://github.com/AndreaTirinzoni/
Meet us at poster #118 @ Pacific Ballroom

SLIDES 17-19

References

Hammersley, J. and Handscomb, D. (1964). Monte Carlo Methods. Methuen's Monographs on Applied Probability and Statistics. Methuen.

Heess, N., Sriram, S., Lemmon, J., Merel, J., Wayne, G., Tassa, Y., Erez, T., Wang, Z., Eslami, A., Riedmiller, M., et al. (2017). Emergence of locomotion behaviours in rich environments. arXiv preprint arXiv:1707.02286.

Laroche, R. and Barlier, M. (2017). Transfer reinforcement learning with shared dynamics. In AAAI.

Lazaric, A. and Restelli, M. (2011). Transfer from multiple MDPs. In Advances in Neural Information Processing Systems.

Lazaric, A., Restelli, M., and Bonarini, A. (2008). Transfer of samples in batch reinforcement learning. In Proceedings of the 25th International Conference on Machine Learning.

OpenAI (2018). Learning dexterous in-hand manipulation. CoRR, abs/1808.00177.

Precup, D. (2000). Eligibility traces for off-policy policy evaluation. Computer Science Department Faculty Publication Series, page 80.

Taylor, M. E., Jong, N. K., and Stone, P. (2008). Transferring instances for model-based reinforcement learning. In Joint European Conference on Machine Learning and Knowledge Discovery in Databases, pages 488–505. Springer.

Tirinzoni, A., Sessa, A., Pirotta, M., and Restelli, M. (2018). Importance weighted transfer of samples in reinforcement learning. In International Conference on Machine Learning, pages 4943–4952.

Vinyals, O., Ewalds, T., Bartunov, S., Georgiev, P., Vezhnevets, A. S., Yeo, M., Makhzani, A., Küttler, H., Agapiou, J., Schrittwieser, J., et al. (2017). StarCraft II: A new challenge for reinforcement learning. arXiv preprint arXiv:1708.04782.