  1. COMP 138: Reinforcement Learning Instructor: Jivko Sinapov Webpage: https://www.eecs.tufts.edu/~jsinapov/teaching/comp150_RL_Fall2020/

  2. Announcements

  3. Reading Assignment ● Chapter 6 of Sutton and Barto

  4. Research Article Topics ● Transfer learning ● Learning with human demonstrations and/or advice ● Approximating Q-functions with neural networks

  5. Reading Assignment ● Chapter 6 of Sutton and Barto ● Matthew E. Taylor, Peter Stone, and Yaxin Liu. Transfer Learning via Inter-Task Mappings for Temporal Difference Learning. Journal of Machine Learning Research, 8(1):2125–2167, 2007. ● Responses should discuss both readings ● You get extra credit for answering others’ questions!

  6. Programming Assignment #2 ● Homework 2 is out

  7. Class Project Discussion ● What makes a good project? ● What makes a good team?

  8. Reading Responses “What are some real-world applications of DP?” – Boriana “Since there are at least four ways Monte Carlo methods are advantageous over DP mentioned, are there any problems in which using DP is more practical?” – Catherine

  9. Reading Responses “How can we define the stopping conditions for value iteration or the Monte Carlo method (how many iterations are enough)?” – Tung

  10. Reading Responses “Are DP methods dependent on initial states?” – Eric

  11. Reading Responses “In the Asynchronous Dynamic Programming method, on what basis do we choose which states should be updated more frequently?” – Pandong

  12. Any other questions about DP?

  13. Dynamic Programming

  14. [Gridworld worked example: states A–D, reward +X, value columns V0–V5 under π = random policy with γ = 0.5; intermediate entries X/12, X/3, X/6. The backup computed on the slide: 1/3 * 0.5 * X/12 + 1/3 * 0.5 * X/6 + X/3.]

  15. [Continuation of the worked example: same gridworld, π = random policy, γ = 0.5, with annotations (1st, 2nd) marking the order in which values propagate from the +X reward; the same backup 1/3 * 0.5 * X/12 + 1/3 * 0.5 * X/6 + X/3 is shown.]
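
To make slides 14–15 concrete, here is a minimal iterative policy evaluation sketch in Python. Only the random policy and γ = 0.5 come from the slides; the layout, the move table, and the +10 reward are illustrative assumptions standing in for the board example.

gamma = 0.5                                # discount factor from the slide
states = ["A", "B", "C", "D"]              # hypothetical 2x2 gridworld
# Legal moves per state (moving into a wall is not allowed, so only
# neighbors appear); this layout is an illustrative assumption.
moves = {"A": ["B", "C"], "B": ["A", "D"], "C": ["A", "D"], "D": []}
reward = {"D": 10.0}                       # stand-in for the slide's +X

V = {s: 0.0 for s in states}               # V_0: all zeros, as on the slide
for sweep in range(5):                     # produces V_1 ... V_5
    V_new = {}
    for s in states:
        if s in reward:                    # treat the reward state as terminal
            V_new[s] = 0.0
            continue
        p = 1.0 / len(moves[s])            # random policy: uniform over moves
        V_new[s] = sum(p * (reward.get(s2, 0.0) + gamma * V[s2])
                       for s2 in moves[s])
    V = V_new
print(V)                                   # approximate V_5 under pi = random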

  16. [Gridworld example, π = optimal policy, γ = 0.5, reward +10.] At each state, the agent has 1 or more actions allowing it to move to neighboring states. Moving in the direction of a wall is not allowed.

  17. Policy Improvement ● Main idea: if, for a particular state s, we can do better than the current policy by taking a different action, then the current policy is not optimal, and changing it to take that action at state s improves it
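
A minimal sketch of that improvement step, assuming the same hypothetical moves/reward containers and value table V as in the evaluation sketch above:

# Greedy policy improvement via one-step lookahead.
# `moves`, `reward`, `gamma`, and `V` are the hypothetical containers
# defined in the policy-evaluation sketch; none come from the slides.
def improve(policy, V, moves, reward, gamma):
    new_policy, stable = {}, True
    for s in moves:
        if not moves[s]:                   # terminal state: nothing to choose
            continue
        # Pick the action whose one-step backup value is best.
        best = max(moves[s],
                   key=lambda s2: reward.get(s2, 0.0) + gamma * V[s2])
        new_policy[s] = best
        if best != policy.get(s):
            stable = False                 # found a state where we do better
    return new_policy, stable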

  18. Policy Iteration ● evaluate → improve → evaluate → improve → …
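
Putting the two steps together gives the policy iteration loop. This sketch reuses improve from above; evaluate here is a simplified deterministic-policy evaluator, again over the hypothetical gridworld containers:

# Evaluate a deterministic policy with repeated backups (simplified:
# a fixed number of sweeps rather than a convergence test).
def evaluate(policy, moves, reward, gamma, sweeps=50):
    V = {s: 0.0 for s in moves}
    for _ in range(sweeps):
        for s in moves:
            if not moves[s]:               # terminal state keeps V = 0
                continue
            s2 = policy[s]
            V[s] = reward.get(s2, 0.0) + gamma * V[s2]
    return V

# Policy iteration: alternate evaluation and improvement until stable.
def policy_iteration(policy, moves, reward, gamma):
    while True:
        V = evaluate(policy, moves, reward, gamma)                 # evaluate
        policy, stable = improve(policy, V, moves, reward, gamma)  # improve
        if stable:                         # improvement changed nothing
            return policy, V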

  19. Value Iteration ● Main idea: – Do one sweep of policy evaluation under the current greedy policy – Repeat until the values stop changing (by more than some small Δ)
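
A minimal value iteration sketch under the same hypothetical containers; the Δ stopping test mirrors the slide:

# Value iteration: each sweep backs up the *max* over actions, which is
# exactly one sweep of evaluation under the current greedy policy.
def value_iteration(moves, reward, gamma, delta_min=1e-6):
    V = {s: 0.0 for s in moves}
    while True:
        delta = 0.0
        for s in moves:
            if s in reward:                # treat reward states as terminal
                continue
            v = max(reward.get(s2, 0.0) + gamma * V[s2] for s2 in moves[s])
            delta = max(delta, abs(v - V[s]))
            V[s] = v
        if delta < delta_min:              # values stopped changing (slide's Δ)
            return V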

  20. [Gridworld example, π = greedy policy, γ = 0.5, reward +10.] At each state, the agent has 1 or more actions allowing it to move to neighboring states. Moving in the direction of a wall is not allowed.

  21. Monte Carlo Methods

  22. Code Demo
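
The demo code itself is not part of this transcript. As a stand-in, here is a sketch of first-visit Monte Carlo prediction in the style of Sutton and Barto: estimate V(s) by averaging sampled returns. The sample_episode hook is a hypothetical environment interface, not taken from the actual demo.

# First-visit Monte Carlo prediction: average sampled returns per state.
# `sample_episode()` is a hypothetical hook that must return a list of
# (state, reward) pairs from one full episode; it is not from the demo.
from collections import defaultdict

def mc_prediction(sample_episode, gamma, n_episodes=1000):
    returns_sum = defaultdict(float)
    returns_cnt = defaultdict(int)
    for _ in range(n_episodes):
        episode = sample_episode()         # [(state, reward), ...]
        G = 0.0
        # Walk the episode backwards, accumulating discounted return G.
        for t in range(len(episode) - 1, -1, -1):
            s, r = episode[t]
            G = r + gamma * G
            # First-visit check: record G only at s's first occurrence.
            if all(s2 != s for s2, _ in episode[:t]):
                returns_sum[s] += G
                returns_cnt[s] += 1
    return {s: returns_sum[s] / returns_cnt[s] for s in returns_sum}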

  23. Reading Responses “What are the advantages and disadvantages of model-free methods? What are the advantages and disadvantages of model-based methods?” – Tung

  24. Reading Responses “In theory, both DP and Monte Carlo will find the optimal policy, but since our implementation of the method won't iterate infinitely, is there a chance that the result is only a locally optimal value?” – Erli

  25. Reading Responses “Are there situations when on-policy methods are preferred over off-policy for reasons other than ease of implementation?” – Eric

  26. Finding Project Partner(s) Breakout

  27. Monte Carlo Tree Search Video
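
The video is not reproduced here. For orientation, the UCB1-style selection rule at the heart of most MCTS variants looks roughly like the sketch below; the node representation and the exploration constant are illustrative assumptions.

# UCB1 selection step of Monte Carlo Tree Search (illustrative sketch).
# Each child is assumed to be a (mean_value, visit_count) pair; the
# exploration constant c is a tunable parameter (sqrt(2) is common).
import math

def uct_select(children, parent_visits, c=math.sqrt(2)):
    def ucb1(child):
        mean, visits = child
        if visits == 0:
            return float("inf")            # always try unvisited children first
        return mean + c * math.sqrt(math.log(parent_visits) / visits)
    # Return the index of the child maximizing the UCB1 score.
    return max(range(len(children)), key=lambda i: ucb1(children[i]))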

  28. THE END
