

  1. What’s Wrong with Meta-Learning (and how we might fix it). Sergey Levine, UC Berkeley / Google Brain

  2. Yahya, Li, Kalakrishnan, Chebotar, Levine, ‘16

  3. Kalashnikov, Irpan, Pastor, Ibarz, Herzog, Jang, Quillen, Holly, Kalakrishnan, Vanhoucke, Levine. QT-Opt: Scalable Deep Reinforcement Learning of Vision-Based Robotic Manipulation Skills

  4. Kalashnikov, Irpan, Pastor, Ibarz, Herzog, Jang, Quillen, Holly, Kalakrishnan, Vanhoucke, Levine. QT-Opt: Scalable Deep Reinforcement Learning of Vision-Based Robotic Manipulation Skills

  5. about four hours vs. about four weeks, nonstop. People can learn new skills extremely quickly. How? We never learn from scratch! Can we transfer past experience in order to learn how to learn?

  6. • The meta-learning/few-shot learning problem • A simpler, model-agnostic meta-learning method • Unsupervised meta-learning

  7. • The meta-learning/few-shot learning problem • A simpler, model-agnostic meta-learning method • Unsupervised meta-learning

  8. Few-shot learning: problem formulation in pictures. Image credit: Ravi & Larochelle ‘17

  9. Few-shot learning: problem formulation in equations. The meta-learner reads in a (few-shot) training set plus a test input and outputs the test label; inputs are e.g. images, outputs e.g. labels. • How to read in the training set? • Many options; RNNs can work. (A reconstruction of the equations is given below.)
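A plausible reconstruction of the formulation this slide sketches (the notation is mine, not copied from the deck): the meta-learner f_θ reads in a small training set for a task plus a test input, and outputs the predicted test label; meta-training minimizes the resulting test loss across many tasks.

```latex
% Few-shot prediction: read in task i's small training set, then label the test input
\hat{y}^{\mathrm{ts}} = f_\theta\left(\mathcal{D}^{\mathrm{tr}}_i,\; x^{\mathrm{ts}}\right),
\qquad \mathcal{D}^{\mathrm{tr}}_i = \{(x_k, y_k)\}_{k=1}^{K}

% Meta-training: minimize the test-label loss across meta-training tasks
\min_\theta \; \sum_i \mathcal{L}\!\left(f_\theta\left(\mathcal{D}^{\mathrm{tr}}_i, x^{\mathrm{ts}}_i\right),\; y^{\mathrm{ts}}_i\right)
```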

  10. Some examples of representations: Santoro et al., “Meta-Learning with Memory-Augmented Neural Networks”; Vinyals et al., “Matching Networks for One-Shot Learning”; Snell et al., “Prototypical Networks for Few-Shot Learning” …and many, many others!

  11. What kind of algorithm is learned? RNN-based meta-learning: the RNN reads the training set and the test input, then emits the test label; this implements the “learned learning algorithm”. • Does it converge? Kind of? • What does it converge to? Who knows… • What to do if it’s not good enough? Nothing…
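To make the “learned learning algorithm” concrete, here is a minimal illustrative sketch (mine, not from the talk) of an RNN-based meta-learner in PyTorch: the recurrent network consumes the training pairs one at a time, then the test input with a blank label, and its final output is the test-label prediction.

```python
# Illustrative sketch of an RNN-based meta-learner (not the talk's code).
import torch
import torch.nn as nn

class RNNMetaLearner(nn.Module):
    def __init__(self, input_dim, label_dim, hidden_dim=128):
        super().__init__()
        # Each step sees an (input, label) pair; the label is zeroed for the test input.
        self.rnn = nn.LSTM(input_dim + label_dim, hidden_dim, batch_first=True)
        self.head = nn.Linear(hidden_dim, label_dim)

    def forward(self, train_x, train_y, test_x):
        # train_x: [B, K, input_dim], train_y: [B, K, label_dim], test_x: [B, input_dim]
        train_steps = torch.cat([train_x, train_y], dim=-1)
        blank_label = torch.zeros_like(train_y[:, 0])
        test_step = torch.cat([test_x, blank_label], dim=-1).unsqueeze(1)
        outputs, _ = self.rnn(torch.cat([train_steps, test_step], dim=1))
        return self.head(outputs[:, -1])   # prediction for the test input
```

The whole sequence-to-label mapping is trained end to end across tasks, which is what the slide means by the RNN implementing the learning algorithm itself.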

  12. • The meta-learning/few-shot learning problem • A simpler, model-agnostic meta-learning method • Unsupervised meta-learning

  13. Let’s step back a bit… is pretraining a type of meta-learning? Better features = faster learning of a new task!

  14. Model-agnostic meta-learning, a general recipe: adapt with a gradient step* on task i, then evaluate the “meta-loss” for task i (* in general, can take more than one gradient step here; we often use 4–10 steps). Chelsea Finn. Finn et al., “Model-Agnostic Meta-Learning”. (The equations are reconstructed below.)
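The recipe’s equations, reconstructed here from the MAML paper cited on the slide (the deck shows them as an image):

```latex
% Inner adaptation: one (or a few) gradient steps on task i's training data
\theta_i' = \theta - \alpha \nabla_\theta \mathcal{L}_{\mathcal{T}_i}\big(f_\theta\big)

% Outer "meta-loss": the post-adaptation loss for task i, summed over tasks
\min_\theta \; \sum_{\mathcal{T}_i} \mathcal{L}_{\mathcal{T}_i}\big(f_{\theta_i'}\big)
\;=\; \min_\theta \; \sum_{\mathcal{T}_i} \mathcal{L}_{\mathcal{T}_i}\Big(f_{\theta - \alpha \nabla_\theta \mathcal{L}_{\mathcal{T}_i}(f_\theta)}\Big)
```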

  15. What did we just do? Just another computation graph… Can implement with any autodiff package (e.g., TensorFlow)
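In that spirit, a minimal sketch of the computation graph (mine, not the authors’ released code; the slide mentions TensorFlow, but this uses PyTorch autograd with a toy linear model for brevity). The inner gradient step is built with create_graph=True, so the outer optimizer differentiates straight through it.

```python
# Minimal MAML sketch: the inner gradient step is just another graph node.
import torch

def task_loss(params, x, y):
    w, b = params                      # tiny linear regression model for illustration
    return ((x @ w + b - y) ** 2).mean()

def maml_meta_loss(params, tasks, alpha=0.01, inner_steps=1):
    meta_loss = 0.0
    for (x_tr, y_tr, x_ts, y_ts) in tasks:
        adapted = params
        for _ in range(inner_steps):   # typically 1-10 inner steps
            grads = torch.autograd.grad(task_loss(adapted, x_tr, y_tr),
                                        adapted, create_graph=True)
            adapted = [p - alpha * g for p, g in zip(adapted, grads)]
        meta_loss = meta_loss + task_loss(adapted, x_ts, y_ts)
    return meta_loss / len(tasks)

# Outer loop: ordinary gradient descent on the meta-loss.
w = torch.zeros(5, 1, requires_grad=True)
b = torch.zeros(1, requires_grad=True)
meta_opt = torch.optim.Adam([w, b], lr=1e-3)

def meta_train_step(tasks):
    # `tasks` is a meta-batch of (x_tr, y_tr, x_ts, y_ts) tensors.
    meta_opt.zero_grad()
    loss = maml_meta_loss([w, b], tasks)
    loss.backward()                    # second-order gradients flow through the inner step
    meta_opt.step()
    return loss.item()
```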

  16. Why does it work?
  RNN-based meta-learning (this implements the “learned learning algorithm”): • Does it converge? Kind of? • What does it converge to? Who knows… • What to do if it’s not good enough? Nothing…
  MAML: • Does it converge? Yes (it’s gradient descent…) • What does it converge to? A local optimum (it’s gradient descent…) • What to do if it’s not good enough? Keep taking gradient steps (it’s gradient descent…)

  17. Universality. Did we lose anything? Universality: meta-learning can learn any “algorithm”. Finn & Levine, “Meta-Learning and Universality”.

  18. Model-agnostic meta-learning: forward/backward locomotion. [Videos: after MAML training; after 1 gradient step (forward reward); after 1 gradient step (backward reward).]

  19. Related work: Ravi & Larochelle, “Optimization as a Model for Few-Shot Learning”; Andrychowicz et al., “Learning to Learn by Gradient Descent by Gradient Descent”; Maclaurin et al., “Gradient-Based Hyperparameter Optimization”; Li & Malik, “Learning to Optimize” …and many, many others!

  20. Follow-up work. MiniImagenet few-shot benchmark (5-shot, 5-way): Finn et al. ‘17: 63.11% • Li et al. ‘17: 64.03% • Kim et al. ‘18 (AutoMeta): 76.29% …and the results keep getting better.

  21. • The meta-learning/few-shot learning problem • A simpler, model-agnostic meta-learning method • Unsupervised meta-learning

  22. Let’s talk about meta-overfitting. • Meta-learning requires task distributions. • When there are too few meta-training tasks, we can meta-overfit. [Videos: after MAML training; after 1 gradient step.] • Specifying task distributions is hard, especially for meta-RL! • Can we propose tasks automatically?

  23. A general recipe for unsupervised meta-RL: environment → unsupervised task acquisition (proposed reward functions) → meta-RL → environment-specific, meta-learned RL algorithm → fast adaptation → reward-maximizing policy. Ben Eysenbach, Abhishek Gupta, Chelsea Finn. Gupta, Eysenbach, Finn, Levine. Unsupervised Meta-Learning for Reinforcement Learning.

  24. Random task proposals. ◼ Use randomly initialized discriminators for reward functions: D → randomly initialized network. ◼ Important: random functions over the state space, not random policies. (One way to instantiate this is sketched below.)
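A hedged sketch of how the random-discriminator proposal could be instantiated (the architecture, the number of tasks, and the exact reward form are my assumptions, not spelled out on the slide): a frozen, randomly initialized network over states defines one reward function per proposed “task”.

```python
# Hedged sketch: task rewards from a randomly initialized, frozen discriminator
# over states. The network is never trained; it just partitions the state space.
import torch
import torch.nn as nn

NUM_TASKS = 10
STATE_DIM = 8

random_discriminator = nn.Sequential(
    nn.Linear(STATE_DIM, 64), nn.ReLU(),
    nn.Linear(64, NUM_TASKS),
)
for p in random_discriminator.parameters():
    p.requires_grad_(False)            # keep the random function fixed

def task_reward(state, task_id):
    """Reward for visiting `state` under proposed task `task_id`:
    log-probability the random discriminator assigns to that task."""
    logits = random_discriminator(torch.as_tensor(state, dtype=torch.float32))
    return torch.log_softmax(logits, dim=-1)[task_id].item()
```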

  25. Diversity-driven proposals. ◼ Policy → visit states which are discriminable. ◼ Discriminator → predict skill from state. [Diagram: the policy (agent), conditioned on a skill z, takes actions in the environment; the discriminator (D) predicts the skill from the visited state, and that prediction defines the task reward for UML.] Eysenbach, Gupta, Ibarz, Levine. Diversity is All You Need.
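The slide’s “task reward for UML” is the skill-discriminability reward from the DIAYN paper it cites; written out (the equation is from that paper, not shown explicitly in the deck):

```latex
% Reward for skill z in state s: how confidently the learned discriminator
% q_phi recovers the skill, relative to the (typically uniform) prior p(z)
r(s, z) \;=\; \log q_\phi(z \mid s) \;-\; \log p(z)
```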

  26. Examples of acquired tasks. [Videos: Ant, Cheetah.] Eysenbach, Gupta, Ibarz, Levine. Diversity is All You Need.

  27. Does it work? [Plots: meta-test performance with rewards on Ant, Cheetah, and 2D Navigation.] Gupta, Eysenbach, Finn, Levine. Unsupervised Meta-Learning for Reinforcement Learning.

  28. What about supervised learning?

  29. Can we meta-train on only unlabeled images? Pipeline: unsupervised learning → task proposals (training and test images grouped into pseudo-classes, e.g. Class 1 / Class 2) → meta-learning (MAML training/test). But... does it outperform unsupervised learning? Kyle Hsu, Chelsea Finn. Hsu, Levine, Finn. Unsupervised Learning via Meta-Learning.

  30. Results: unsupervised meta-learning (unsupervised learning → task proposals → meta-learning). CACTUs: Clustering to Automatically Construct Tasks for Unsupervised Meta-Learning. Embedding choices: BiGAN (Donahue et al. ‘17), DeepCluster (Caron et al. ‘18). miniImageNet, 5-shot, 5-way:
  method | accuracy
  MAML with labels | 62.13%
  BiGAN kNN | 31.10%
  BiGAN logistic | 33.91%
  BiGAN MLP + dropout | 29.06%
  BiGAN cluster matching | 29.49%
  BiGAN CACTUs | 51.28%
  DeepCluster CACTUs | 53.97%
  (All rows except “MAML with labels” use no true labels at all!) Same story across: • 3 different embedding methods • 4 datasets (Omniglot, miniImageNet, CelebA, MNIST). Hsu, Levine, Finn. Unsupervised Learning via Meta-Learning.
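A hedged sketch of the CACTUs task-construction step (mine, not the authors’ released code; the function names, the k-means choice, and the cluster count are illustrative): cluster unsupervised embeddings, treat cluster assignments as pseudo-labels, and sample N-way K-shot tasks from them for meta-learning.

```python
# Illustrative sketch of CACTUs-style task proposals from unlabeled data.
import numpy as np
from sklearn.cluster import KMeans

def propose_tasks(embeddings, num_tasks, n_way=5, k_shot=5, k_query=15,
                  num_clusters=500, seed=0):
    rng = np.random.default_rng(seed)
    # 1. Cluster the unsupervised embeddings (BiGAN, DeepCluster, ...);
    #    cluster ids act as pseudo-labels.
    pseudo_labels = KMeans(n_clusters=num_clusters,
                           random_state=seed).fit_predict(embeddings)
    counts = np.bincount(pseudo_labels, minlength=num_clusters)
    valid = np.flatnonzero(counts >= k_shot + k_query)

    tasks = []
    for _ in range(num_tasks):
        # 2. Pick n_way clusters with enough members, then split each into
        #    a k_shot support set and a k_query query set.
        chosen = rng.choice(valid, size=n_way, replace=False)
        support, query = [], []
        for way, cluster_id in enumerate(chosen):
            members = np.flatnonzero(pseudo_labels == cluster_id)
            picks = rng.choice(members, size=k_shot + k_query, replace=False)
            support += [(idx, way) for idx in picks[:k_shot]]
            query += [(idx, way) for idx in picks[k_shot:]]
        tasks.append((support, query))  # dataset indices paired with pseudo-class ids
    return tasks
```

The resulting (support, query) index sets can then be fed to MAML exactly as if the pseudo-classes were real labels.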

  31. • The meta-learning/few-shot learning problem • A simpler, model-agnostic meta-learning method • Unsupervised meta-learning

  32. What’s next?
  • Probabilistic meta-learning: learn to sample multiple hypotheses. Finn*, Xu*, Levine. Probabilistic Model-Agnostic Meta-Learning. 2018.
  • Meta-learning online learning & continual learning. Nagabandi, Finn, Levine. Deep Online Learning via Meta-Learning: Continual Adaptation via Model-Based RL. 2018.
  • Meta-learning to interpret weak supervision and natural language (e.g., Instruction: “Move blue triangle to green goal.” Correction 1: “Enter the blue room.” Correction 2: “Enter the red room.”). Yu*, Finn*, Xie, Dasari, Abbeel, Levine. One-Shot Imitation from Observing Humans via Domain-Adaptive Meta-Learning. 2018. Co-Reyes, Gupta, Sanjeev, Altieri, DeNero, Abbeel, Levine. Meta-Learning Language-Guided Policy Learning. 2018.

  33. RAIL website: http://rail.eecs.berkeley.edu source code: http://rail.eecs.berkeley.edu/code.html Robotic AI & Learning Lab
