

  1. Introduction to Reinforcement Learning (RL)

  2. Overview of topics
     • About Reinforcement Learning
     • The Reinforcement Learning Problem
     • Inside an RL agent
     • Temporal difference learning

  3. Many faces of Reinforcement Learning

  4. What is Reinforcement Learning?
     • Learning from interaction
     • Goal-oriented learning
     • Learning about, from, and while interacting with an external environment
     • Learning what to do—how to map situations to actions—so as to maximize a numerical reward signal

  5. Branches of AI
     [Diagram: Machine Learning branches into Supervised Learning, Unsupervised Learning, and Reinforcement Learning]

  6. Supervised Learning
     [Diagram: Inputs → Supervised Learning System → Outputs]
     Training info = desired (target) outputs
     Error = (target output – actual output)

  7. Reinforcement Learning
     [Diagram: Inputs → RL System → Outputs ("actions")]
     Training info = evaluations ("rewards" / "penalties")
     Objective: get as much reward as possible

  8. Recipe for creative behavior: explore & exploit
     • Creativity: finding a new approach / solution / …
       – Exploration (random / systematic / …)
       – Evaluation (utility = expected rewards)
       – Selection (ongoing behavior and learning)
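A minimal sketch of this explore/evaluate/select loop, framed as an ε-greedy bandit in Python; the arm reward probabilities and the EPSILON value are illustrative assumptions, not numbers from the slides:

```python
import random

REWARD_PROBS = [0.2, 0.5, 0.8]   # hypothetical arms (assumed values)
EPSILON = 0.1                    # fraction of exploratory choices (assumed)

estimates = [0.0] * len(REWARD_PROBS)   # estimated expected reward per arm
counts = [0] * len(REWARD_PROBS)

for _ in range(1000):
    if random.random() < EPSILON:
        arm = random.randrange(len(REWARD_PROBS))                    # exploration
    else:
        arm = max(range(len(REWARD_PROBS)), key=lambda a: estimates[a])  # selection
    reward = 1.0 if random.random() < REWARD_PROBS[arm] else 0.0
    counts[arm] += 1
    # evaluation: incremental sample average of the reward
    estimates[arm] += (reward - estimates[arm]) / counts[arm]

print(estimates)  # should approach REWARD_PROBS
```

With EPSILON = 0 the agent can lock onto the first arm that ever pays out; the occasional random choice is what lets it discover the better arm.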

  9. E. coli bacteria and creativity
     • Escherichia coli searches for food using trial and error:
       – Choose a random direction by tumbling, then start swimming straight
       – Evaluate progress
       – Continue longer or stop earlier depending on progress
     http://biology.about.com/library/weekly/aa081299.htm

  10. Zebra finch: from singing in the shower to performing artist
      1. A newborn zebra finch can't sing
      2. The baby bird listens to its father's song
      3. The baby starts to "babble", using the father's song as a target template
      4. The song develops through trial and error – "singing in the shower"
      5. No exploration when singing to a female
      http://www.brain.riken.jp/bsi-news/bsinews34/no34/speciale.html

  11. Zebra finch: from singing in the shower to performing artist • http://www.youtube.com/watch?v=Md6bsvkauPg

  12. Key Features of RL
      • Learner is not told which actions to take
      • Trial-and-error search
      • Possibility of delayed reward (sacrifice short-term gains for greater long-term gains)
      • The need to explore and exploit
      • Considers the whole problem of a goal-directed agent interacting with an uncertain environment

  13. Complete Agent
      • Temporally situated
      • Continual learning and planning
      • Objective is to affect the environment
      • Environment is stochastic and uncertain
      [Diagram: Agent sends an action to the Environment; the Environment returns a state and a reward]

  14. Elements of RL
      • Policy: what to do
      • Reward: what is good
      • Value: what is good because it predicts reward
      • Model of the environment: what follows what

  15. An Extended Example: Tic-Tac-Toe
      [Diagram: game tree of board positions, alternating levels for x's moves and o's moves]
      Assume an imperfect opponent: he/she sometimes makes mistakes

  16. An RL Approach to Tic-Tac-Toe
      1. Make a table with one entry per state, V(s) – the estimated probability of winning:
         V(s) = 1 for states with three x's in a row (win), V(s) = 0 for states with three
         o's in a row (loss) and for draws, and V(s) = 0.5 initially for all other states.
      2. Now play lots of games. To pick our moves, look ahead one step from the current
         state to the various possible next states.
      Just pick the next state with the highest estimated probability of winning — the largest V(s); a greedy move. But 10% of the time pick a move at random; an exploratory move.
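A sketch of this selection rule in Python, assuming board states are hashable (e.g. tuples) and a hypothetical next_states() helper that enumerates the states reachable by one of our moves:

```python
import random

# Value table from the slide: state -> estimated probability of winning.
V = {}

def value(state):
    # Unseen states start at the 0.5 initial estimate from the slide.
    return V.setdefault(state, 0.5)

def pick_move(state, next_states, epsilon=0.1):
    """Pick the next state: greedy (largest V) 90% of the time,
    a random exploratory move the other 10%."""
    candidates = next_states(state)  # hypothetical one-step lookahead helper
    if random.random() < epsilon:
        return random.choice(candidates)   # exploratory move
    return max(candidates, key=value)      # greedy move
```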

  17. RL Learning Rule for Tic-Tac-Toe
      s – the state before our greedy move
      s′ – the state after our greedy move
      We increment each V(s) toward V(s′) – a backup:
          V(s) ← V(s) + α [ V(s′) − V(s) ]
      α – the step-size parameter, a small positive fraction, e.g. α = 0.1
      [Diagram: sequence of moves with backup arrows; "exploratory" moves marked]
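The same backup written as Python, reusing the dictionary table and the 0.5 default for unseen states from the previous slide:

```python
ALPHA = 0.1  # the step-size parameter alpha from the slide

def backup(V, s, s_next, default=0.5):
    """Increment V(s) toward V(s_next) after a greedy move from s to s_next."""
    v_s = V.get(s, default)          # estimate for the state before the move
    v_next = V.get(s_next, default)  # estimate for the state after the move
    V[s] = v_s + ALPHA * (v_next - v_s)
```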

  18. How can we improve this T.T.T. player?
      • Take advantage of symmetries – representation/generalization
      • Do we need “random” moves? Why?
        – Do we always need a full 10%?
      • Can we learn from “random” moves?
      • …

  19. Temporal difference learning
      • A solution to the temporal credit assignment problem
      • Replace the reward signal by the change in expected future reward
        – Prediction moves the rewards from the future as close to the actions as possible
        – Primary rewards such as sugar are replaced with secondary (or higher-order) rewards such as money
        – In the brain, dopamine ≈ temporal difference signal
        – Supervised learning is used to channel information in predictive stimuli into learning
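One standard way to write the "change in expected future reward" is the TD(0) update; the slide gives no formula, so the discount factor GAMMA and the step size below are assumed values. The TD error δ = r + γ·V(s′) − V(s) plays the role of the dopamine-like signal:

```python
GAMMA = 0.9  # discount factor (assumed; not given on the slide)
ALPHA = 0.1  # step-size parameter

def td0_update(V, s, r, s_next, terminal=False):
    """One TD(0) step: move V(s) toward r + GAMMA * V(s_next),
    returning the TD error (the dopamine-like teaching signal)."""
    target = r if terminal else r + GAMMA * V.get(s_next, 0.0)
    delta = target - V.get(s, 0.0)
    V[s] = V.get(s, 0.0) + ALPHA * delta
    return delta
```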

  20. Reinforcement learning example
      [Maze diagram: Start, S2, S3, S4, S5, S7, S8, Goal; arrows indicate the strength between two problem states]
      Start maze …

  21. The first response leads to S2 … The next state is chosen by randomly sampling from the possible next states, weighted by their associative strength (associative strength = line width). [Maze diagram]

  22. Suppose the randomly sampled response leads to S3 … [Maze diagram]

  23. At S3, choices lead to either S2, S4, or S7. S7 was picked (randomly). [Maze diagram]

  24. By chance, S3 was picked next … [Maze diagram]

  25. Next response is S4. [Maze diagram]

  26. And S5 was chosen next (randomly). [Maze diagram]

  27. And the goal is reached … [Maze diagram]

  28. The goal is reached; strengthen the associative connection between the goal state and the last response. Next time S5 is reached, part of the associative strength is passed back to S4 … [Maze diagram]

  29. Start the maze again … [Maze diagram]

  30. Let's suppose that after a couple of moves we end up at S5 again. [Maze diagram]

  31. S5 is likely to lead to the GOAL through the strengthened route. In reinforcement learning, strength is also passed back to the last state. This paves the way for the next pass through the maze. [Maze diagram]

  32. The situation after lots of restarts … [Maze diagram: a strong route now runs from Start to Goal]
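The maze walk-through on slides 20–32 can be simulated with a few lines of Python. The topology below is reconstructed from the transitions the slides mention (Start → S2, S3 → {S2, S4, S7}, S5 → Goal, …); the edges involving S8, the bonus, and the backup rate alpha are assumptions:

```python
import random

# Maze edges reconstructed from the slides; S8's connections are guesses.
EDGES = {
    "Start": ["S2"],
    "S2":    ["S3", "S4"],
    "S3":    ["S2", "S4", "S7"],
    "S4":    ["S3", "S5"],
    "S5":    ["S4", "Goal"],
    "S7":    ["S3", "S8"],
    "S8":    ["S7", "S5"],
}

# Associative strength per edge ("line width"); uniform to begin with.
strength = {(s, t): 1.0 for s, nbrs in EDGES.items() for t in nbrs}

def step(state):
    """Sample the next state, weighted by associative strength."""
    nbrs = EDGES[state]
    return random.choices(nbrs, weights=[strength[(state, n)] for n in nbrs])[0]

def run_episode(alpha=0.3, bonus=1.0):
    path, state = ["Start"], "Start"
    while state != "Goal":
        state = step(state)
        path.append(state)
    # Strengthen the connection between the goal and the last response...
    strength[(path[-2], path[-1])] += bonus
    # ...and pass part of each edge's strength back along the path.
    for i in range(len(path) - 2, 0, -1):
        s, t, u = path[i - 1], path[i], path[i + 1]
        strength[(s, t)] += alpha * (strength[(t, u)] - strength[(s, t)])

for _ in range(500):
    run_episode()
```

After a few hundred restarts the strengths along routes into the Goal dominate the sampling, so later episodes reach the Goal in fewer steps, matching the "situation after lots of restarts" on the last maze slide.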

  33. Stanford autonomous helicopter • https://www.youtube.com/watch?v=VCdxqn0fcnE

  34. RL applications in robotics
      • Robot learns to flip pancakes
      • Autonomous spider learns to walk forward by reinforcement learning
      • Reinforcement learning for a robotic soccer goalkeeper

  35. Conclusion
      • The Reinforcement Learning Problem
      • Inside an RL agent
        – Policy
        – Reward
        – Value
        – Model
      • Temporal difference learning
