
Breaking the Curse of Horizon: Infinite-Horizon Off-Policy Estimation (PowerPoint Presentation)



  1. Breaking the Curse of Horizon: Infinite-Horizon Off-Policy Estimation
     Qiang Liu†, Lihong Li‡, Ziyang Tang†, Dengyong Zhou‡
     †Department of Computer Science, The University of Texas at Austin   ‡Google Brain (KIR)

  2. Off-Policy Reinforcement Learning
     Off-policy evaluation: evaluate a new policy π using only data collected from an old policy π₀.
     Widely useful when running new RL policies is costly or impossible, due to high cost, risk, or ethical and legal concerns:
     - Healthcare
     - Robotics & control
     - Advertising and recommendation

  3. “Curse of Horizon”
     Importance Sampling (IS): given a trajectory $\tau = \{s_t, a_t\}_{t=0}^{T} \sim \pi_0$,
     $$R_\pi = \mathbb{E}_{\tau \sim \pi_0}\big[\, w(\tau)\, R(\tau) \,\big], \quad \text{where } w(\tau) = \prod_{t=0}^{T} \frac{\pi(a_t \mid s_t)}{\pi_0(a_t \mid s_t)}.$$
     The Curse of Horizon: the IS weight w(τ) is a product of T terms, where T is the horizon length.
     Its variance can grow exponentially with T, which is problematic for infinite-horizon problems (T = ∞).
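A minimal NumPy sketch (not from the slides) of the trajectory-wise IS estimator above; the per-trajectory fields `pi`, `pi0`, and `rewards` are an assumed data layout.

```python
import numpy as np

def trajectory_is_estimate(trajectories):
    """Trajectory-wise IS: R_pi ~= mean_i [ w(tau_i) * R(tau_i) ].

    Each trajectory is assumed to be a dict of per-step arrays:
      'pi'      - target-policy probabilities  pi(a_t | s_t)
      'pi0'     - behavior-policy probabilities pi_0(a_t | s_t)
      'rewards' - per-step rewards
    """
    estimates = []
    for traj in trajectories:
        # w(tau) is a product of T per-step ratios -- the source of the
        # "curse of horizon": its variance can grow exponentially in T.
        w = np.prod(np.asarray(traj['pi']) / np.asarray(traj['pi0']))
        R = np.sum(traj['rewards'])  # return R(tau), assumed undiscounted here
        estimates.append(w * R)
    return float(np.mean(estimates))
```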

  4. Breaking the Curse
     Key idea: apply IS on (s, a) pairs, not on the whole trajectory τ:
     $$R_\pi = \mathbb{E}_{(s,a) \sim d_{\pi_0}}\big[\, w(s,a)\, r(s,a) \,\big], \quad \text{where } w(s,a) = \frac{d_\pi(s,a)}{d_{\pi_0}(s,a)}$$
     and $d_\pi(s,a)$ is the stationary / average visitation distribution of (s, a) under policy π.
     The stationary density ratio w(s, a):
     - is NOT a product of T terms;
     - can be small even for infinite horizon (T = ∞);
     - but is more difficult to estimate.
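For contrast, a sketch (again not from the slides) of the per-(s, a) estimator above, assuming a flat list of (s, a, r) samples from $d_{\pi_0}$ and an already-estimated ratio function `w_hat`:

```python
import numpy as np

def stationary_ratio_estimate(transitions, w_hat):
    """Per-pair IS: R_pi ~= mean over (s, a, r) ~ d_pi0 of w_hat(s, a) * r.

    `transitions`: assumed list of (s, a, r) tuples drawn from the behavior
    policy's visitation distribution d_pi0.
    `w_hat(s, a)`: an estimate of d_pi(s, a) / d_pi0(s, a).
    """
    # One weight per (s, a) pair -- no product over the horizon, so the
    # estimator stays well behaved as T -> infinity.
    weighted = [w_hat(s, a) * r for (s, a, r) in transitions]
    return float(np.mean(weighted))
```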

  5. Main Algorithm
     1. Estimate the density ratio with a new minimax objective:
        $$\hat{w} = \arg\min_{w \in \mathcal{W}} \max_{f \in \mathcal{F}} L(w, f; D_{\pi_0})$$
     2. Value estimation by IS:
        $$\hat{R}_\pi = \hat{\mathbb{E}}_{(s,a) \sim d_{\pi_0}}\big[\, \hat{w}(s,a)\, r(s,a) \,\big]$$
     Theoretical guarantees are developed for the new minimax objective.
     Can be kernelized: the inner max has a closed form when $\mathcal{F}$ is an RKHS.
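The slide only states the abstract form min over w of max over f of L(w, f; D_{π₀}), with a closed-form inner max when F is an RKHS. The sketch below is one possible kernelized instantiation, not the paper's exact objective: the residual `delta`, the RBF kernel, the scalar state encoding, and the mean-one penalty are all illustrative assumptions.

```python
import numpy as np
from scipy.optimize import minimize

def rbf_kernel(x, y, bandwidth=1.0):
    # RBF kernel on scalar-encoded states (illustrative encoding).
    return np.exp(-((x - y) ** 2) / (2.0 * bandwidth ** 2))

def kernelized_objective(w, data):
    """Closed-form inner max over an RKHS ball (a stand-in, not the paper's L).

    `data`: list of (s, a, s_next, beta) transitions sampled under pi_0,
    with beta = pi(a | s) / pi_0(a | s) and integer state indices.
    `w`: vector of candidate density-ratio values, one per discrete state.
    """
    s = np.array([d[0] for d in data])
    sp = np.array([d[2] for d in data])
    beta = np.array([d[3] for d in data])
    # Assumed residual: a correct ratio should roughly balance weighted
    # inflow and outflow at each visited state.
    delta = w[s] * beta - w[sp]
    K = rbf_kernel(sp[:, None], sp[None, :])  # Gram matrix on next states
    return float(delta @ K @ delta) / len(data) ** 2

def estimate_ratio(data, n_states):
    # Soft penalty keeps w near a valid ratio (nonnegative, mean one).
    objective = lambda w: kernelized_objective(w, data) + (w.mean() - 1.0) ** 2
    res = minimize(objective, np.ones(n_states), bounds=[(0.0, None)] * n_states)
    return res.x
```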

  6. Empirical Results
     [Figure: log MSE comparisons for (a) varying number of trajectories n, (b) different behavior policies,
     and (c) truncated length T, on a traffic-control task using the SUMO simulator [5].
     Methods compared: Naive Average, On-Policy (oracle), WIS Trajectory-wise, WIS Step-wise, and Our Method.]

  7. Thank You!
     Location: Room 210 & 230 AB, Poster #121
     Time: Wed, Dec 5th, 05:00–07:00 PM
     References & Acknowledgment
     [1] [HLR'16] K. Hofmann, L. Li, and F. Radlinski. Online evaluation for information retrieval.
     [2] [JL'16] N. Jiang and L. Li. Doubly robust off-policy value evaluation for reinforcement learning.
     [3] [LMS'15] L. Li, R. Munos, and Cs. Szepesvari. Toward minimax off-policy value estimation.
     [4] [TB'16] P. S. Thomas and E. Brunskill. Data-efficient off-policy policy evaluation for reinforcement learning.
     [5] [KEBB'12] D. Krajzewicz, J. Erdmann, M. Behrisch, and L. Bieker. Recent development and applications of SUMO - Simulation of Urban MObility.
     Work supported in part by NSF CRII 1830161 and Google Cloud.
