Learning a Prior over Intent via Meta-Inverse Reinforcement Learning


  1. Learning a Prior over Intent via Meta-Inverse Reinforcement Learning. Kelvin Xu, Ellis Ratner, Anca Dragan, Sergey Levine, Chelsea Finn. University of California, Berkeley.

  2-7. Motivation: a well-specified reward function remains an important assumption for applying RL in practice, both in simulation and in the real world. It is often easier to provide expert data and learn a reward function using inverse RL, but inverse RL frequently requires a lot of data to learn a generalizable reward. This is due in part to the fundamental ambiguity of reward learning.

  8-12. Goal: how can agents infer rewards from one or a few demonstrations? Intuition: demonstrations from previous tasks induce a prior over the space of possible future tasks. Shared context across tasks enables efficient adaptation.

  13-15. Meta-inverse reinforcement learning: using prior task information to accelerate inverse RL.
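For reference, a minimal sketch of what "inverse RL" optimizes here, written in maximum-entropy IRL notation (the notation is an assumption on my part, not copied from the slides): demonstrated trajectories are modeled as exponentially more likely when they collect more reward, and the reward parameters are fit by maximizing the likelihood of the demonstrations. Meta-inverse RL reuses demonstrations from earlier tasks so that this objective can be optimized from only one or a few demonstrations of a new task.

```latex
p(\tau \mid \theta) \;\propto\; \exp\!\Big(\sum_{t} r_{\theta}(s_t, a_t)\Big),
\qquad
\max_{\theta} \sum_{\tau \in \mathcal{D}_{\mathcal{T}}} \log p(\tau \mid \theta)
```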

  16-19. Our instantiation (background): model-agnostic meta-learning (MAML).
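As background for these slides, a compact statement of the MAML update in its standard form (step sizes α and β; notation mine): the initialization θ is trained so that one gradient step on a task's training loss produces parameters that do well on that task's held-out loss.

```latex
\phi_{\mathcal{T}} = \theta - \alpha\,\nabla_{\theta}\,\mathcal{L}^{\text{train}}_{\mathcal{T}}(\theta),
\qquad
\theta \leftarrow \theta - \beta\,\nabla_{\theta} \sum_{\mathcal{T}} \mathcal{L}^{\text{test}}_{\mathcal{T}}\big(\phi_{\mathcal{T}}\big)
```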

  20-22. Our approach: MANDRIL (Meta Reward and Intention Learning).
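A minimal sketch of the idea behind MANDRIL, assuming a tiny tabular chain MDP with one reward parameter per state (the environment, demonstrations, step sizes, and the first-order outer update below are illustrative assumptions; the paper uses neural-network rewards on image observations and derives the exact meta-gradient). The inner loop adapts the reward with one MaxEnt IRL gradient step on a task's demonstrations; the outer loop improves the initialization so that the adapted reward explains held-out demonstrations.

```python
import numpy as np

N_STATES, N_ACTIONS, HORIZON = 5, 2, 5   # chain MDP: actions = {left, right}
START = N_STATES // 2

# P[a, s, s']: deterministic left/right moves on the chain.
P = np.zeros((N_ACTIONS, N_STATES, N_STATES))
for s in range(N_STATES):
    P[0, s, max(s - 1, 0)] = 1.0
    P[1, s, min(s + 1, N_STATES - 1)] = 1.0

def soft_visitations(theta):
    """MaxEnt IRL backward/forward pass: soft value iteration gives a
    stochastic policy; rolling it forward gives expected state visitations."""
    r = theta                                    # reward: one parameter per state
    V = np.zeros(N_STATES)
    for _ in range(HORIZON):
        Q = r[:, None] + np.einsum('asn,n->sa', P, V)
        V = np.log(np.exp(Q).sum(axis=1))
    pi = np.exp(Q - V[:, None])                  # softmax policy over actions
    d = np.zeros(N_STATES); d[START] = 1.0
    visits = d.copy()
    for _ in range(HORIZON - 1):
        d = np.einsum('s,sa,asn->n', d, pi, P)
        visits += d
    return visits

def irl_grad(theta, demo_states):
    """Gradient of the MaxEnt IRL log-likelihood w.r.t. per-state rewards:
    empirical visitation counts minus the policy's expected visitations."""
    counts = np.bincount(demo_states, minlength=N_STATES).astype(float)
    counts *= HORIZON / counts.sum()
    return counts - soft_visitations(theta)

theta = np.zeros(N_STATES)        # meta-learned reward initialization
alpha, beta = 0.5, 0.1            # inner / outer step sizes

for step in range(200):
    # Two toy tasks (assumed data): walk to the right end or to the left end.
    demo = np.array([2, 3, 4, 4, 4]) if step % 2 == 0 else np.array([2, 1, 0, 0, 0])
    # Inner loop: adapt the reward to this task with one IRL gradient step.
    phi = theta + alpha * irl_grad(theta, demo)
    # Outer loop (first-order approximation): nudge the initialization so the
    # adapted reward explains the task's held-out demonstration (reused here).
    theta += beta * irl_grad(phi, demo)
```

The alternating structure mirrors MAML: θ plays the role of the meta-learned prior over rewards, and φ is the task-adapted reward obtained from a single demonstration.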

  23-26. Domain 1: SpriteWorld environment (meta-training vs. evaluation time). Each task is a specific landmark-navigation task, and every task exhibits the same terrain preferences. At evaluation time, the position of the landmark varies and unseen sprites are used.

  27-31. Domain 2: first-person navigation (SUNCG), shown as a task illustration and from the agent's view. Tasks require learning both navigation (NAV) and picking (PICK); tasks share a common theme but differ in visual layout and specific goal.

  32-35. Results: with only a limited number of demonstrations, performance is significantly better.

  36. Results: optimizing the initial weights consistently improves performance across tasks. Success rate is significantly improved on both test and unseen house layouts, especially on the harder PICK task.

  37-41. The reward function can be adapted with a limited number of demonstrations.

  42. Thanks! Tuesday, Poster #222. Anca Dragan, Sergey Levine, Chelsea Finn, Kelvin Xu, Ellis Ratner.
