A Quick Look at the “Reinforcement Learning” course – A. LAZARIC (SequeL Team @INRIA-Lille)


SLIDE 1

MVA-RL Course

A Quick Look at the “Reinforcement Learning” course

  • A. LAZARIC (SequeL Team @INRIA-Lille)

ENS Cachan - Master 2 MVA

SequeL – INRIA Lille

SLIDE 2

Why

SLIDE 3

Why: Important Problems

SLIDE 4

Why: Important Problems

◮ Autonomous robotics
  ◮ Elder care
  ◮ Exploration of unknown/dangerous environments
  ◮ Robotics for entertainment

SLIDE 5

Why: Important Problems

◮ Autonomous robotics
◮ Financial applications
  ◮ Trading execution algorithms
  ◮ Portfolio management
  ◮ Option pricing

SLIDE 6

Why: Important Problems

◮ Autonomous robotics
◮ Financial applications
◮ Energy management
  ◮ Energy grid integration
  ◮ Maintenance scheduling
  ◮ Energy market regulation
  ◮ Energy production management

SLIDE 7

Why: Important Problems

◮ Autonomous robotics
◮ Financial applications
◮ Energy management
◮ Recommender systems
  ◮ Web advertising
  ◮ Product recommendation
  ◮ Date matching

SLIDE 8

Why: Important Problems

◮ Autonomous robotics
◮ Financial applications
◮ Energy management
◮ Recommender systems
◮ Social applications
  ◮ Bike sharing optimization
  ◮ Election campaign
  ◮ ER service optimization
  ◮ Resource distribution optimization
SLIDE 9

Why: Important Problems

◮ Autonomous robotics
◮ Financial applications
◮ Energy management
◮ Recommender systems
◮ Social applications
◮ And many more...

SLIDE 10

What

SLIDE 11

What: Decision-Making under Uncertainty

[Diagram: the agent-environment interaction loop – the environment sends a state (perception) to the agent, and the agent sends back an action (actuation)]
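
To make the diagram concrete, here is a minimal sketch (not from the slides) of the agent-environment loop; the two-state ToyEnv and the purely random agent are invented for illustration.

```python
import random

class ToyEnv:
    """Hypothetical two-state environment, invented for illustration."""
    def __init__(self):
        self.state = 0

    def step(self, action):
        # The agent's action actuates on the environment ...
        reward = 1.0 if (self.state == 1 and action == 1) else 0.0
        self.state = random.choice([0, 1])
        # ... and the agent perceives the next state and a reward.
        return self.state, reward

env = ToyEnv()
state = env.state
for t in range(5):
    action = random.choice([0, 1])     # a deliberately naive agent: act at random
    state, reward = env.step(action)   # perception of the new state and reward
    print(t, action, state, reward)
```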

SLIDE 12

How: Reinforcement Learning

Reinforcement learning is learning what to do – how to map situations to actions – so as to maximize a numerical reward signal. The learner is not told which actions to take, as in most forms of machine learning, but instead must discover which actions yield the most reward by trying them (trial–and–error). In the most interesting and challenging cases, actions may affect not only the immediate reward but also the next situation and, through that, all subsequent rewards (delayed reward).

“Reinforcement Learning: An Introduction”, Sutton and Barto (1998).

SLIDE 13

How: the Course

[Diagram: the agent-environment interaction loop, as on slide 11]

A formal and rigorous treatment of the RL approach to decision-making under uncertainty

SLIDE 14

What: the Highlights of the Course

How do we formalize the agent-environment interaction?

Markov Decision Process and Policy

A Markov decision process (MDP) is represented by the tuple M = ⟨X, A, r, p⟩, where X is the state space, A is the action space, r : X × A → [0, B] is the reward function, and p(·|x, a) is the transition dynamics. At time t ∈ N a decision rule πt : X → A is a mapping from states to actions, and a policy (strategy, plan) is a sequence of decision rules π = (π0, π1, π2, . . . ).

The Bellman equations

  V^π(x) = r(x, π(x)) + γ Σ_y p(y|x, π(x)) V^π(y),

  V*(x) = max_{a∈A} [ r(x, a) + γ Σ_y p(y|x, a) V*(y) ].
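
As a concrete illustration (not part of the original slides), the sketch below evaluates V^π on a hypothetical two-state, two-action MDP by solving the linear system implied by the first Bellman equation; every number in it is made up.

```python
import numpy as np

# Hypothetical 2-state, 2-action MDP (X = {0, 1}, A = {0, 1}), numbers chosen only for illustration.
gamma = 0.9
r = np.array([[0.0, 1.0],                    # r[x, a]
              [0.5, 0.0]])
p = np.array([[[0.8, 0.2], [0.1, 0.9]],      # p[x, a, y] = p(y | x, a)
              [[0.5, 0.5], [0.9, 0.1]]])

pi = np.array([1, 0])                        # a deterministic decision rule pi(x)

# Bellman equation for the fixed policy pi: V^pi = r_pi + gamma * P_pi V^pi,
# rearranged into the linear system (I - gamma * P_pi) V^pi = r_pi.
r_pi = r[np.arange(2), pi]
P_pi = p[np.arange(2), pi]
V_pi = np.linalg.solve(np.eye(2) - gamma * P_pi, r_pi)
print("V^pi =", V_pi)
```
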
SLIDE 15

What: the Highlights of the Course

How do we solve an MDP?

Dynamic Programming

Value Iteration
  V_{k+1} = T V_k

Policy Iteration
  ◮ Evaluate: given π_k compute V^{π_k}.
  ◮ Improve: given V^{π_k} compute π_{k+1} = greedy(V^{π_k}).
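
A hedged sketch of both schemes, reusing the same invented two-state MDP as above; this is illustrative code, not course material.

```python
import numpy as np

# Same illustrative 2-state, 2-action MDP as in the previous sketch.
gamma = 0.9
r = np.array([[0.0, 1.0], [0.5, 0.0]])
p = np.array([[[0.8, 0.2], [0.1, 0.9]],
              [[0.5, 0.5], [0.9, 0.1]]])

# Value Iteration: V_{k+1} = T V_k.
V = np.zeros(2)
for _ in range(200):
    V = np.max(r + gamma * np.einsum('xay,y->xa', p, V), axis=1)

# Policy Iteration: evaluate V^{pi_k}, then take the greedy improvement.
pi = np.zeros(2, dtype=int)
while True:
    r_pi, P_pi = r[np.arange(2), pi], p[np.arange(2), pi]
    V_pi = np.linalg.solve(np.eye(2) - gamma * P_pi, r_pi)                    # evaluate
    pi_new = np.argmax(r + gamma * np.einsum('xay,y->xa', p, V_pi), axis=1)   # improve
    if np.array_equal(pi_new, pi):
        break
    pi = pi_new

print("V* ~", V, "  greedy policy:", pi)
```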

SLIDE 16

What: the Highlights of the Course

How do we solve an MDP “online”?

Q-learning

Given an observed transition (x, a, x′, r), update

  Q_{k+1}(x, a) = (1 − α) Q_k(x, a) + α [ r + γ max_{a′} Q_k(x′, a′) ].
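
For illustration only, a minimal tabular Q-learning loop with an ε-greedy behaviour policy on the same invented MDP; the step size, exploration rate, and sampling scheme are assumptions, not course code.

```python
import numpy as np

rng = np.random.default_rng(0)

# Same illustrative MDP as above, now accessed only through sampled transitions.
gamma, alpha, eps = 0.9, 0.1, 0.1
r = np.array([[0.0, 1.0], [0.5, 0.0]])
p = np.array([[[0.8, 0.2], [0.1, 0.9]],
              [[0.5, 0.5], [0.9, 0.1]]])

Q = np.zeros((2, 2))
x = 0
for _ in range(20000):
    # epsilon-greedy behaviour policy
    a = int(rng.integers(2)) if rng.random() < eps else int(np.argmax(Q[x]))
    x_next = int(rng.choice(2, p=p[x, a]))       # observe x'
    reward = r[x, a]                             # observe r
    # Q-learning update on the observed transition (x, a, x', r)
    Q[x, a] = (1 - alpha) * Q[x, a] + alpha * (reward + gamma * np.max(Q[x_next]))
    x = x_next

print(Q)
```
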
SLIDE 17

What: the Highlights of the Course

How do we effectively trade-off exploration and exploitation?

Multi-arm Bandit

Given K arms, we define the regret over n rounds of a bandit strategy as

  R_n = Σ_{t=1}^{n} X_{i*,t} − Σ_{t=1}^{n} X_{I_t,t}.

For the UCB strategy we can prove

  R_n ≤ Σ_{i ≠ i*} (b² / ∆_i) log(n).
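
A sketch of the UCB1 index strategy on hypothetical Bernoulli arms, tracking the pseudo-regret; the arm means and the exploration constant are illustrative choices, not values from the course.

```python
import numpy as np

rng = np.random.default_rng(0)

means = np.array([0.4, 0.5, 0.7])   # hypothetical Bernoulli arm means (arm 2 is optimal)
K, n = len(means), 10000

counts = np.zeros(K)                # number of pulls of each arm
sums = np.zeros(K)                  # sum of rewards observed from each arm
pulls = np.zeros(n, dtype=int)

for t in range(n):
    if t < K:
        arm = t                     # pull each arm once to initialise the indices
    else:
        ucb = sums / counts + np.sqrt(2.0 * np.log(t) / counts)   # optimistic index
        arm = int(np.argmax(ucb))
    reward = float(rng.random() < means[arm])                     # Bernoulli sample
    counts[arm] += 1
    sums[arm] += reward
    pulls[t] = arm

# Pseudo-regret: reward lost by not always pulling the best arm.
print("pseudo-regret over", n, "rounds:", n * means.max() - means[pulls].sum())
```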

SLIDE 18

What: the Highlights of the Course

How do we solve a “huge” MDP?

Approximate Dynamic Programming

Approximate Value Iteration
  V̂_{k+1} = T V̂_k

Approximate Policy Iteration
  ◮ Evaluate: given π_k compute V̂^{π_k}.
  ◮ Improve: given V̂^{π_k} compute π̂_{k+1} ≈ greedy(V̂^{π_k}).
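
As an illustrative sketch only, approximate value iteration with a linear least-squares fit on a randomly generated MDP; the feature map, the MDP, and the fitting scheme are all assumptions made for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical random MDP with many states, to motivate function approximation.
nX, nA, gamma = 50, 3, 0.9
r = rng.random((nX, nA))
p = rng.dirichlet(np.ones(nX), size=(nX, nA))     # p[x, a] is a distribution over next states

# Linear value-function approximation: V_hat(x) = phi(x) . theta.
d = 8
phi = rng.standard_normal((nX, d))                # random feature map, for illustration

theta = np.zeros(d)
for _ in range(200):
    V_hat = phi @ theta
    targets = np.max(r + gamma * p @ V_hat, axis=1)            # (T V_hat)(x) at every state
    theta, *_ = np.linalg.lstsq(phi, targets, rcond=None)      # fit V_hat_{k+1} ~ T V_hat_k

print("approximate values at the first five states:", (phi @ theta)[:5])
```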

SLIDE 19

What: the Highlights of the Course

How “sample-efficient” are these algorithms?

Sample Complexity of LSPI

  ‖V^{π_K} − V*‖_{2,ρ} ≤ inf_{f∈F} ‖V* − f‖_{2,ρ} + (C_ρ / (1 − γ)) √( log(1/δ) / n ).

SLIDE 20

See you on Tue at 11h in C103!

[Campus map showing the lecture halls and classrooms (Amphi Marie Curie, Amphi e-media, Amphi Tocqueville, etc.)]
SLIDE 21

Who

Lectures
  Alessandro LAZARIC
  SequeL Team, INRIA-Lille Nord Europe
  alessandro.lazaric@inria.fr
  researchers.lille.inria.fr/~lazaric/

Practical Sessions
  Emilie KAUFMANN
  Telecom ParisTech
  emilie.kaufmann@telecom-paristech.fr
  perso.telecom-paristech.fr/~kaufmann/

SLIDE 22

When/What/Where

Date    Topic                          Classroom
01/10   Intro/MDP                      C103
08/10   Dynamic Programming            C103
15/10   RL Algorithms                  C103
22/10   TP on DP and RL                C109
29/10   Multi-arm Bandit (1)           C103
05/11   TP on Bandit                   C109
12/11   Multi-arm Bandit (2)           C103
19/11   TP on Bandit                   C109
26/11   Approximate DP                 C103
03/12   Sample Complexity of ADP       C103
10/12   TP on ADP                      C109
17/12   Guest lectures + Internships   C103 (TBC)
14/01   Evaluation                     C103 (TBC)

Lectures are from 11am to 1pm; TP sessions should be from 11am to 1:15pm.

SLIDE 23

Evaluation

◮ Paper review + oral presentation
◮ Projects
◮ Internships (stages)
◮ PhD

SLIDE 24

Reinforcement Learning

Alessandro Lazaric
alessandro.lazaric@inria.fr
sequel.lille.inria.fr