
Garbage In, Reward Out: Bootstrapping Exploration in Multi-Armed Bandits (PowerPoint Presentation)



  1. Garbage In, Reward Out: Bootstrapping Exploration in Multi-Armed Bandits
     Branislav Kveton (Google Research), Csaba Szepesvári (DeepMind and University of Alberta), Sharan Vaswani (Mila, University of Montreal), Zheng Wen (Adobe Research), Mohammad Ghavamzadeh (Facebook AI Research), Tor Lattimore (DeepMind)

  2. Stochastic Multi-Armed Bandit
     [Figure: Arm 1, Arm 2, ..., Arm K]
     ● A learning agent sequentially pulls K arms over n rounds
     ● In round t ∈ [n], the agent pulls arm I_t and observes its reward
     ● The reward of arm i lies in [0, 1] and is drawn i.i.d. from a distribution with mean μ_i
     ● Goal: maximize the expected n-round reward
     ● Challenge: the exploration-exploitation trade-off
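  Note (not part of the original slides): a minimal Python sketch of the interaction protocol above, assuming a K-armed Bernoulli bandit whose means μ_i are hidden from the agent. The class name, the means, and the uniform placeholder policy are illustrative choices only.

      import numpy as np

      class BernoulliBandit:
          """K-armed bandit: arm i's reward is Bernoulli(mu[i]), hence in [0, 1]."""
          def __init__(self, mu, seed=0):
              self.mu = np.asarray(mu)               # unknown to the agent
              self.rng = np.random.default_rng(seed)

          def pull(self, i):
              return float(self.rng.random() < self.mu[i])

      # Protocol: in each round t, the agent picks I_t and observes the reward
      env = BernoulliBandit(mu=[0.4, 0.7, 0.5])      # hypothetical means
      policy_rng = np.random.default_rng(1)
      n, total = 1000, 0.0
      for t in range(n):
          I_t = int(policy_rng.integers(len(env.mu)))  # placeholder: uniform exploration
          total += env.pull(I_t)
      print(total / n)                                 # average per-round reward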

  3-6. Thompson Sampling (Thompson, 1933) [built up over slides 3-6]
     ● Sample μ_{i,t} from the posterior distribution P_{i,t} and pull arm I_t = argmax_i μ_{i,t}
     [Figure: posterior densities P_{1,t} and P_{2,t} over the expected reward, centered near μ_1 and μ_2]
     ● Key properties
       ○ P_{i,t} concentrates at μ_i with the number of pulls
       ○ μ_{i,t} overestimates μ_i with a sufficient probability
     ● Choice of posterior: Bernoulli bandit → P_{i,t} = beta; Gaussian bandit → P_{i,t} = normal; neural network → P_{i,t} = ???
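  Note (ours, not from the slides): a minimal sketch of the Bernoulli-bandit case above, where P_{i,t} is a Beta posterior. Each round, μ_{i,t} is sampled for every arm, the argmax is pulled, and the pulled arm's posterior is updated. The means and the function name are hypothetical.

      import numpy as np

      def thompson_sampling_bernoulli(mu, n, seed=0):
          """Beta-Bernoulli Thompson sampling on a bandit with (hidden) means mu."""
          rng = np.random.default_rng(seed)
          K = len(mu)
          alpha, beta = np.ones(K), np.ones(K)   # Beta(1, 1) uniform priors
          total = 0.0
          for t in range(n):
              samples = rng.beta(alpha, beta)    # mu_{i,t} ~ P_{i,t}
              i = int(np.argmax(samples))        # I_t = argmax_i mu_{i,t}
              r = float(rng.random() < mu[i])    # Bernoulli reward in {0, 1}
              alpha[i] += r                      # posterior concentrates at mu_i
              beta[i] += 1.0 - r                 #   as the number of pulls grows
              total += r
          return total / n

      print(thompson_sampling_bernoulli(mu=[0.4, 0.7, 0.5], n=1000))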

  7. General Randomized Exploration
     ● Sample μ_{i,t} from a distribution P_{i,t} and pull arm I_t = argmax_i μ_{i,t}
     ● How do we design the distribution P_{i,t}?
     [Figure: distributions P_{1,t} and P_{2,t} over the expected reward, centered near μ_1 and μ_2]
     ● Key properties
       ○ P_{i,t} concentrates at (scaled and shifted) μ_i with the number of pulls
       ○ μ_{i,t} overestimates (scaled and shifted) μ_i with a sufficient probability

  8-12. Giro (Garbage In, Reward Out) with [0, 1] Rewards [built up over slides 8-12]
     ● μ_{i,t} is the mean of a non-parametric bootstrap sample of the history of arm i augmented with pseudo-rewards (garbage)
     Example with two arms:
       Arm      History    Garbage          Bootstrap sample       μ_{i,t}
       Arm 1    0 0        0 0 1 1          1 1 1 1 0 0            2/3
       Arm 2    1 0 1      0 0 0 1 1 1      1 1 0 1 1 1 0 0 0      5/9
     ● Benefits and challenges of randomized garbage
       ○ μ_{i,t} overestimates scaled and shifted μ_i with a sufficient probability
       ○ Bias in the estimate of μ_i
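  Note (ours): a minimal sketch of the Giro sampling step as illustrated in the table above. Each arm's history is augmented with one pseudo-reward of 0 and one of 1 per observed reward (matching the slide's example; the number of pseudo-reward pairs is a design choice we fix here), μ_{i,t} is the mean of a with-replacement bootstrap resample of the augmented history, and the arm with the largest μ_{i,t} is pulled. Function names and means are illustrative.

      import numpy as np

      def giro_sample_mean(history, rng, a=1):
          """Giro index for one arm: mean of a bootstrap resample of the arm's
          history augmented with a pseudo-rewards of 0 and a of 1 per observation."""
          s = len(history)
          augmented = np.concatenate([history, np.zeros(a * s), np.ones(a * s)])
          boot = rng.choice(augmented, size=len(augmented), replace=True)
          return boot.mean()

      def giro_bernoulli(mu, n, seed=0):
          """Run the sketch on a Bernoulli bandit with (hidden) means mu for n rounds."""
          rng = np.random.default_rng(seed)
          K = len(mu)
          histories = [[] for _ in range(K)]
          total = 0.0
          for t in range(n):
              if t < K:
                  i = t                                  # pull each arm once to initialize
              else:
                  idx = [giro_sample_mean(np.array(h), rng) for h in histories]
                  i = int(np.argmax(idx))                # I_t = argmax_i mu_{i,t}
              r = float(rng.random() < mu[i])            # observed reward in {0, 1}
              histories[i].append(r)
              total += r
          return total / n

      print(giro_bernoulli(mu=[0.4, 0.7, 0.5], n=1000))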

  13. Contextual Giro with [0, 1] Rewards
     ● Straightforward generalization to complex structured problems
     ● μ_{i,t} is the estimated reward of arm i under a model trained on a non-parametric bootstrap sample of the history augmented with pseudo-rewards (garbage)
     [Table: contextual example with History, Garbage, and Bootstrap sample columns of (context x_j, arm, reward ∈ {0, 1}) tuples; μ_{i,t} is the estimate from a model learned on the bootstrap sample]
     ● Giro is as general as the ε-greedy policy... but with no tuning!
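  Note (ours): a rough sketch of one round of the contextual variant described above, under these assumptions: each arm's (context, reward) history is augmented with pseudo-observations (context, 0) and (context, 1), a reward model is fit to a bootstrap resample of the augmented data, and the arm whose model predicts the highest reward for the current context is pulled. Logistic regression is an arbitrary stand-in for the learned model; all names and data below are hypothetical.

      import numpy as np
      from sklearn.linear_model import LogisticRegression

      def contextual_giro_choose(x_t, histories, rng):
          """Pick an arm for context x_t. histories[i] is a list of (context, reward)
          pairs for arm i, with rewards in {0, 1}; the reward model is a stand-in."""
          scores = []
          for hist in histories:
              # Garbage: one pseudo-reward of 0 and one of 1 per observed context
              X = np.array([x for x, _ in hist] * 3)
              y = np.array([r for _, r in hist] + [0.0] * len(hist) + [1.0] * len(hist))
              # Non-parametric bootstrap sample of the augmented history
              idx = rng.integers(0, len(y), size=len(y))
              Xb, yb = X[idx], y[idx]
              if yb.min() == yb.max():
                  p = float(yb[0])   # degenerate resample: every reward is identical
              else:
                  model = LogisticRegression().fit(Xb, yb)
                  p = float(model.predict_proba(x_t.reshape(1, -1))[0, 1])
              scores.append(p)       # mu_{i,t}: estimated reward of arm i at x_t
          return int(np.argmax(scores))

      # Usage with made-up two-dimensional contexts and two arms
      rng = np.random.default_rng(0)
      histories = [[(np.array([0.2, 1.0]), 1.0), (np.array([0.9, 1.0]), 0.0)],
                   [(np.array([0.5, 1.0]), 0.0), (np.array([0.1, 1.0]), 1.0)]]
      print(contextual_giro_choose(np.array([0.3, 1.0]), histories, rng))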

  14. How can we run bandit algorithms with neural networks easily? How does Giro compare to Thompson sampling? See you at poster #125!
