

  1. Monte Carlo Methods
     CS60077: Reinforcement Learning
     Abir Das, IIT Kharagpur
     Oct 05 and 06, 2020

  2. Agenda
     § Understand how to evaluate policies in the model-free setting using Monte Carlo methods
     § Understand Monte Carlo methods in the model-free setting for control of reinforcement learning problems

  3. Resources
     § Reinforcement Learning by David Silver [Link]
     § Reinforcement Learning by Balaraman Ravindran [Link]
     § Monte Carlo Simulation by Nando de Freitas [Link]
     § SB: Chapter 5

  4. Model-Free Setting
     § Like the previous few lectures, we will deal with prediction and control problems, but this time in a model-free setting
     § In the model-free setting we do not have full knowledge of the MDP
     § Model-free prediction: estimate the value function of an unknown MDP
     § Model-free control: optimise the value function of an unknown MDP
     § Model-free methods require only experience: sample sequences of states, actions, and rewards (S_1, A_1, R_2, ...) from actual or simulated interaction with an environment
     § Actual experience requires no knowledge of the environment's dynamics
     § Simulated experience requires a model only to generate samples; no knowledge of the complete probability distributions of state transitions is required, and in many cases generating samples is easy

  5. Monte Carlo
     § What is the probability that a dart thrown uniformly at random in the unit square will hit the red area?
     [Figure: unit square with corners (0,0), (1,0), (0,1), (1,1); the red area is a disc of radius 1/2 centred in the square]
     P(area) = ?

  6. Monte Carlo
     § What is the probability that a dart thrown uniformly at random in the unit square will hit the red area?
     [Figure: unit square with a red disc of radius 1/2 centred in it]
     P(area) = π/4

  7. Monte Carlo
     § What is the probability that a dart thrown uniformly at random in the unit square will hit the red area?
     [Figure: unit square; the red area is now a disc of radius ρ]
     P(area) = ?

  8. Monte Carlo
     § What is the probability that a dart thrown uniformly at random in the unit square will hit the red area?
     [Figure: unit square; the red area is a disc of radius ρ]
     P(area) = πρ²

  9. Monte Carlo
     § What is the probability that a dart thrown uniformly at random in the unit square will hit the red area?
     [Figure: unit square; the red area is now an irregular region with no closed-form area]
     P(area) = ?

  10. Monte Carlo
     § What is the probability that a dart thrown uniformly at random in the unit square will hit the red area?
     [Figure: the unit square overlaid with a grid of boxes; boxes covering the red area are counted against all boxes in the square]
     P(area) = (# red boxes) / (# blue boxes)

  11. Monte Carlo
     § What is the probability that a dart thrown uniformly at random in the unit square will hit the red area?
     [Figure: darts (marked x) scattered uniformly over the unit square; some land inside the red area]
     P(area) = (# darts in red area) / (# darts)
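The dart experiment is easy to reproduce numerically. Below is a minimal sketch, assuming (as in the earlier slides) that the red area is a disc of radius 1/2, so the true answer is π/4 ≈ 0.785; the function names are illustrative, not from the slides.

    import random

    def estimate_area(in_region, n_darts=100_000, seed=0):
        """Estimate the area of a region inside the unit square by
        throwing darts uniformly at random and counting hits."""
        rng = random.Random(seed)
        hits = sum(in_region(rng.random(), rng.random()) for _ in range(n_darts))
        return hits / n_darts  # (# darts in region) / (# darts)

    # Red disc of radius 1/2 centred at (1/2, 1/2); true area is pi/4 ~ 0.785
    in_disc = lambda x, y: (x - 0.5) ** 2 + (y - 0.5) ** 2 <= 0.25
    print(estimate_area(in_disc))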

  12. History of Monte Carlo
     § The bomb and ENIAC
     Image taken from: www.livescience.com
     Image taken from: www.digitaltrends.com

  13. Monte Carlo for Expectation Calculation
     § Let's say we want to compute E[f(x)] = ∫ f(x) p(x) dx
     § Draw i.i.d. samples {x^(i)}_{i=1}^N from the probability density p(x)
     § Approximate p(x) ≈ (1/N) Σ_{i=1}^N δ_{x^(i)}(x), where δ_{x^(i)}(x) is an impulse at x^(i) on the x axis
     § Then E[f(x)] = ∫ f(x) p(x) dx ≈ ∫ f(x) (1/N) Σ_{i=1}^N δ_{x^(i)}(x) dx = (1/N) Σ_{i=1}^N ∫ f(x) δ_{x^(i)}(x) dx = (1/N) Σ_{i=1}^N f(x^(i))
     Image taken from: Nando de Freitas: MLSS 08
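As a concrete illustration of the estimator above, here is a minimal sketch; the choice of p = N(0, 1) and f(x) = x² (so that E[f(x)] = 1) is ours, purely to check the approximation against a known answer.

    import random

    def mc_expectation(f, sample_p, n=100_000, seed=0):
        """Approximate E[f(x)] = ∫ f(x) p(x) dx by the empirical mean
        (1/N) Σ f(x^(i)) over i.i.d. draws x^(i) ~ p."""
        rng = random.Random(seed)
        return sum(f(sample_p(rng)) for _ in range(n)) / n

    # x ~ N(0, 1) and f(x) = x^2, so E[f(x)] = Var(x) = 1
    print(mc_expectation(lambda x: x * x, lambda rng: rng.gauss(0.0, 1.0)))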

  14. Monte Carlo Policy Evaluation
     § Learn v_π from episodes of experience under policy π: S_1, A_1, R_2, S_2, A_2, R_3, ..., S_k, A_k, R_k ~ π
     § Recall that the return is the total discounted reward: G_t = R_{t+1} + γ R_{t+2} + ... + γ^{T-1} R_T
     § Recall that the value function is the expected return: v_π(s) = E[G_t | S_t = s]
     § Monte Carlo policy evaluation uses the empirical mean return instead of the expected return
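Computing G_t for every step of a recorded episode is a single backward pass, since G_t = R_{t+1} + γ G_{t+1}. A small sketch (the helper name is ours):

    def returns_from_rewards(rewards, gamma):
        """Discounted returns G_t for t = 0..T-1, where rewards[t] = R_{t+1}.
        Computed backwards via G_t = R_{t+1} + gamma * G_{t+1}."""
        G, out = 0.0, []
        for r in reversed(rewards):
            G = r + gamma * G
            out.append(G)
        return out[::-1]

    print(returns_from_rewards([0, 0, 1], gamma=0.9))  # [0.81, 0.9, 1.0]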

  15. First-Visit Monte Carlo Policy Evaluation
     § To evaluate state s, i.e. to learn v_π(s):
     § The first time-step t that state s is visited in an episode,
       - increment counter N(s) ← N(s) + 1
       - increment total return S(s) ← S(s) + G_t
     § Value is estimated by the mean return V(s) = S(s)/N(s)
     § By the law of large numbers, V(s) → v_π(s) as N(s) → ∞
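A minimal sketch of first-visit evaluation, assuming each episode is recorded as a list of (S_t, R_{t+1}) pairs and reusing returns_from_rewards from above:

    from collections import defaultdict

    def first_visit_mc(episodes, gamma):
        """First-visit Monte Carlo policy evaluation."""
        N = defaultdict(int)    # visit counter N(s)
        S = defaultdict(float)  # accumulated return S(s)
        for episode in episodes:
            G = returns_from_rewards([r for _, r in episode], gamma)
            seen = set()
            for t, (s, _) in enumerate(episode):
                if s not in seen:   # count only the first visit to s
                    seen.add(s)
                    N[s] += 1
                    S[s] += G[t]
        return {s: S[s] / N[s] for s in N}  # V(s) = S(s) / N(s)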

  16. Every-Visit Monte Carlo Policy Evaluation
     § To evaluate state s, i.e. to learn v_π(s):
     § Every time-step t that state s is visited in an episode,
       - increment counter N(s) ← N(s) + 1
       - increment total return S(s) ← S(s) + G_t
     § Value is estimated by the mean return V(s) = S(s)/N(s)
     § By the law of large numbers, V(s) → v_π(s) as N(s) → ∞
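The every-visit variant differs from the first-visit sketch above only in dropping the first-visit check:

    from collections import defaultdict

    def every_visit_mc(episodes, gamma):
        """Every-visit Monte Carlo policy evaluation: every occurrence of s
        in an episode contributes its return to the mean."""
        N, S = defaultdict(int), defaultdict(float)
        for episode in episodes:
            G = returns_from_rewards([r for _, r in episode], gamma)
            for t, (s, _) in enumerate(episode):
                N[s] += 1           # no first-visit check here
                S[s] += G[t]
        return {s: S[s] / N[s] for s in N}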

  17. Blackjack Example
     § States (200 of them):
       - Current sum (12-21)
       - Dealer's showing card (ace-10)
       - Do I have a "usable" ace? (yes-no)
     § Action stick: stop receiving cards (and terminate)
     § Action twist: take another card (no replacement)
     § Reward for stick:
       - +1 if sum of cards > sum of dealer cards
       -  0 if sum of cards = sum of dealer cards
       - -1 if sum of cards < sum of dealer cards
     § Reward for twist:
       - -1 if sum of cards > 21 (and terminate)
       -  0 otherwise
     § Transitions: automatically twist if sum of cards < 12
     Slide courtesy: David Silver [Deepmind]

  18. Blackjack Example
     § Policy: stick if sum of cards ≥ 20, otherwise twist
     Slide courtesy: David Silver [Deepmind]
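This policy is easy to evaluate with the first-visit estimator sketched earlier. The sketch below assumes an infinite deck and the usual dealer rule of twisting until reaching 17 (the slides do not spell out the dealer's behaviour); γ = 1 since blackjack episodes are undiscounted.

    import random

    def draw_card(rng):
        """Infinite deck: 1 (ace) to 10, with face cards counted as 10."""
        return min(rng.randint(1, 13), 10)

    def hand_value(cards):
        """Return (best sum, usable_ace), counting one ace as 11 if it fits."""
        total, has_ace = sum(cards), 1 in cards
        return (total + 10, True) if has_ace and total + 10 <= 21 else (total, False)

    def play_episode(rng):
        """One game under 'stick iff sum >= 20', recorded as a list of
        ((player sum, dealer card, usable ace), reward) pairs."""
        player, dealer_card = [draw_card(rng), draw_card(rng)], draw_card(rng)
        states = []
        while True:
            total, usable = hand_value(player)
            if total > 21:                      # bust on a twist: reward -1
                return [(s, 0) for s in states[:-1]] + [(states[-1], -1)]
            if total >= 12:                     # below 12 we twist automatically
                states.append((total, dealer_card, usable))
            if total >= 20:                     # stick
                break
            player.append(draw_card(rng))       # twist
        dealer = [dealer_card, draw_card(rng)]
        while hand_value(dealer)[0] < 17:       # assumed dealer rule
            dealer.append(draw_card(rng))
        p, d = hand_value(player)[0], hand_value(dealer)[0]
        r = 1 if (d > 21 or p > d) else (0 if p == d else -1)
        return [(s, 0) for s in states[:-1]] + [(states[-1], r)]

    rng = random.Random(0)
    V = first_visit_mc([play_episode(rng) for _ in range(500_000)], gamma=1.0)
    print(V.get((20, 10, False)))  # e.g. value of a hard 20 against a dealer 10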

  19. Monte Carlo Control
     § We will now see how Monte Carlo estimation can be used in control
     § This is much like generalized policy iteration (GPI), where one maintains both an approximate policy and an approximate value function

  20. Monte Carlo Control
     § We will now see how Monte Carlo estimation can be used in control
     § This is much like generalized policy iteration (GPI), where one maintains both an approximate policy and an approximate value function
     § Policy evaluation is done as Monte Carlo evaluation
     § Then we can do greedy policy improvement

  21. Monte Carlo Control
     § We will now see how Monte Carlo estimation can be used in control
     § This is much like generalized policy iteration (GPI), where one maintains both an approximate policy and an approximate value function
     § Policy evaluation is done as Monte Carlo evaluation
     § Then we can do greedy policy improvement
     § What is the problem?

  22. Monte Carlo Control
     § We will now see how Monte Carlo estimation can be used in control
     § This is much like generalized policy iteration (GPI), where one maintains both an approximate policy and an approximate value function
     § Policy evaluation is done as Monte Carlo evaluation
     § Then we can do greedy policy improvement
     § What is the problem?
     § π′(s) = argmax_{a∈A} [ r(s, a) + γ Σ_{s′∈S} p(s′|s, a) v_π(s′) ]

  23. Monte Carlo Control
     § Greedy policy improvement over v(s) requires a model of the MDP:
       π′(s) = argmax_{a∈A} [ r(s, a) + γ Σ_{s′∈S} p(s′|s, a) v_π(s′) ]
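To make the dependence on the model explicit, here is a sketch of the greedy improvement step (the dictionary layout for r and p is our assumption): it needs r(s, a) and p(s′|s, a), which are exactly what the model-free setting lacks. Estimating action values Q(s, a) instead sidesteps this, since π′(s) = argmax_a Q(s, a) needs no lookahead through the model.

    def greedy_policy_from_v(V, states, actions, r, p, gamma):
        """Greedy improvement over a state-value function V. Needs the MDP
        model: r[(s, a)] is the expected reward and p[(s, a)] maps each
        successor state s2 to its transition probability."""
        def q(s, a):  # one-step lookahead through the model
            return r[(s, a)] + gamma * sum(
                prob * V[s2] for s2, prob in p[(s, a)].items())
        return {s: max(actions, key=lambda a: q(s, a)) for s in states}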
