SLIDE 1

Sim2Real

Deep Reinforcement Learning and Control Katerina Fragkiadaki

Carnegie Mellon School of Computer Science

SLIDE 2

So far

The requirement of a large number of samples for RL, only feasible in simulation, effectively renders RL a model-based framework: we can't really rely (solely) on interaction in the real world (as of today).

  • In the real world, we usually fine-tune models and policies learned in simulation.

SLIDE 3

Physics Simulators

The requirement of a large number of samples for RL, only feasible in simulation, effectively renders RL a model-based framework: we can't really rely (solely) on interaction in the real world (as of today).

MuJoCo, Bullet, Gazebo, etc.
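As a concrete illustration (not from the slides), here is a minimal sketch of how cheaply transitions can be collected in a simulated environment; it assumes the `gymnasium` package and the standard `Pendulum-v1` task, but any simulated control environment would do.

```python
# Minimal sketch: collecting many transitions in simulation (assumes `gymnasium` is installed).
import gymnasium as gym

env = gym.make("Pendulum-v1")               # a simple simulated control task
obs, info = env.reset(seed=0)

transitions = []
for _ in range(10_000):                      # cheap in simulation, prohibitive on a real robot
    action = env.action_space.sample()       # random exploration policy, for illustration only
    next_obs, reward, terminated, truncated, info = env.step(action)
    transitions.append((obs, action, reward, next_obs))
    obs = next_obs
    if terminated or truncated:
        obs, info = env.reset()
env.close()
```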

SLIDE 4

Pros of Simulation

  • We can afford many more samples!
  • Safety
  • Avoids wear and tear of the robot
  • Good at rigid multibody dynamics
SLIDE 5

Cons of Simulation

  • Under-modeling: many physical events are not modeled.
  • Wrong parameters: even if our physical equations were correct, we would need to estimate the right parameters, e.g., inertias and frictions (system identification).
  • Systematic discrepancy w.r.t. the real world regarding:
    • observations
    • dynamics

As a result, policies learned in simulation do not transfer to the real world.

  • Hard to simulate deformable objects (finite element methods are very computationally intensive).

SLIDE 6
What has shown to work

  • Domain randomization (dynamics, images): with enough variability in the simulator, the real world may appear to the model as "just another variation".
  • Learning not from pixels but from label maps: semantic maps are closer between simulation and the real world than textures are.
  • Learning higher-level policies, not low-level controllers, as the low-level dynamics are very different between sim and real.

SLIDE 7

Domain Randomization for Transferring Deep Neural Networks from Simulation to the Real World

Tobin et al., 2017 arXiv:1703.06907

Domain randomization for detecting and grasping objects
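A minimal sketch, in the spirit of Tobin et al., 2017, of what visual domain randomization looks like when generating synthetic training images. The parameter names, ranges, and the `render_scene` renderer are illustrative assumptions, not the authors' code.

```python
# Sketch of visual domain randomization for synthetic data generation.
import numpy as np

rng = np.random.default_rng(0)

def sample_scene_params():
    """Randomize the appearance of each rendered scene (illustrative parameters)."""
    return {
        "object_color": rng.uniform(0.0, 1.0, size=3),    # random RGB color/texture
        "table_color": rng.uniform(0.0, 1.0, size=3),
        "light_position": rng.uniform(-1.0, 1.0, size=3),
        "light_intensity": rng.uniform(0.5, 2.0),
        "camera_jitter": rng.normal(0.0, 0.05, size=3),    # small camera pose perturbation
        "num_distractors": rng.integers(0, 10),
    }

def generate_dataset(render_scene, num_images=10_000):
    """Render every training image under a freshly randomized scene.
    `render_scene` is a hypothetical simulator call: params -> (image, object_pose)."""
    data = []
    for _ in range(num_images):
        params = sample_scene_params()
        image, object_pose = render_scene(params)
        data.append((image, object_pose))
    return data
```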

SLIDE 8

Cuboid Pose Estimation

Let's try a more fine-grained task

SLIDE 9

Data Generation


SLIDE 10

Data Generation


SLIDE 11

Model Output - Belief Maps


Regressing to vertices
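To make "belief maps" concrete: instead of regressing vertex coordinates directly, the network outputs one heatmap per cuboid vertex, peaked at that vertex's image location. A minimal numpy sketch of how such a training target could be constructed; the Gaussian-blob construction and the example coordinates are assumptions for illustration.

```python
import numpy as np

def vertex_belief_map(vertex_xy, height, width, sigma=2.0):
    """Target belief map: a 2D Gaussian centered on the vertex pixel location."""
    ys, xs = np.mgrid[0:height, 0:width]
    vx, vy = vertex_xy
    return np.exp(-((xs - vx) ** 2 + (ys - vy) ** 2) / (2.0 * sigma ** 2))

# One map per cuboid vertex, stacked into the training target (example coordinates).
vertices = [(10, 12), (40, 12), (40, 50), (10, 50), (15, 18), (45, 18), (45, 56), (15, 56)]
target = np.stack([vertex_belief_map(v, 64, 64) for v in vertices])  # shape (8, 64, 64)
```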

SLIDE 12

Baxter’s camera

SIM2REAL

SLIDE 13

Data - Contrast and Brightness

Data generation

SLIDE 14

Baxter’s camera

SIM2REAL

SLIDE 15

Surprising Result

SIM2REAL

SLIDE 16

Baxter’s camera

SIM2REAL

SLIDE 17

Car detection

Training Deep Networks with Synthetic Data: Bridging the Reality Gap by Domain Randomization, NVIDIA

VKITTI domain randomization data generation

SLIDE 18

Dynamics randomization

SLIDE 19

Ideas:

  • Consider a distribution over simulation models instead of a single one, to learn policies robust to modeling errors that work well under many "worlds" (hard model mining).
  • Progressively bring the simulation model distribution closer to the real world.
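A minimal sketch of the first idea: before each episode, dynamics parameters are drawn from a distribution rather than kept fixed, so the policy cannot exploit one particular simulator. The parameter names, ranges, and the `update_policy_on_world` callable are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_dynamics_params():
    """Draw one simulated 'world' by randomizing physical parameters (illustrative ranges)."""
    return {
        "torso_mass": rng.normal(6.0, 1.5),
        "ground_friction": rng.uniform(0.5, 1.5),
        "joint_damping": rng.uniform(0.5, 2.0),
    }

def train_over_model_distribution(update_policy_on_world, num_episodes=1000):
    """Run each episode in a freshly sampled world, so the learned policy must work
    across modeling errors. `update_policy_on_world` is a hypothetical callable that
    runs one episode under the given parameters, updates the policy, and returns the return."""
    returns = []
    for _ in range(num_episodes):
        params = sample_dynamics_params()
        returns.append(update_policy_on_world(params))
    return np.mean(returns)
```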

SLIDE 20

Policy Search under model distribution

p: simulator parameters. Learn a policy that performs best in expectation over MDPs in the source domain distribution:

SLIDE 21

Policy Search under model distribution

p: simulator parameters. Learn a policy that performs best in expectation over MDPs in the source domain distribution. Alternatively, learn a policy that performs best in expectation over the worst ε-percentile of MDPs in the source domain distribution: hard world model mining.
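A sketch of the worst ε-percentile idea above: sample many simulated worlds, evaluate the current policy in each, and update only on the worst-performing fraction. The helpers `sample_params`, `rollout_return`, and `update_policy` are hypothetical placeholders, not code from the slides.

```python
import numpy as np

def hard_model_mining_step(sample_params, rollout_return, update_policy,
                           num_worlds=100, epsilon=0.1):
    """One training step that focuses updates on the hardest simulated worlds.

    sample_params  : () -> simulator parameters (hypothetical)
    rollout_return : params -> (trajectory, episode_return) for the current policy (hypothetical)
    update_policy  : list of trajectories -> None, e.g. a policy-gradient step (hypothetical)
    """
    worlds = [sample_params() for _ in range(num_worlds)]
    results = [rollout_return(p) for p in worlds]
    returns = np.array([ret for _, ret in results])

    # Keep only the worst epsilon-percentile of worlds ("hard world mining").
    cutoff = np.quantile(returns, epsilon)
    hard_trajectories = [traj for (traj, ret) in results if ret <= cutoff]

    update_policy(hard_trajectories)
```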

SLIDE 22

Hard model mining

SLIDE 23

Hard model mining results

Hard world mining results in policies with high reward over a wider range of parameters.

SLIDE 24

Adapting the source domain distribution

Sample a set of simulation parameters p_i from a sampling distribution S. The posterior weight of a parameter sample p_i measures the fit of that simulation model: how probable an observed target state-action trajectory is under it; the more probable, the more we prefer that simulation model. Fit a Gaussian model over simulator parameters based on the posterior weights of the samples.
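A minimal sketch of this adaptation step: draw parameter samples from the current Gaussian, weight each one by how well its simulated trajectory matches the observed real trajectory, and refit the Gaussian with those weights. The Gaussian observation model and the `simulate_trajectory` helper are assumptions for illustration.

```python
import numpy as np

def adapt_source_distribution(mean, cov, simulate_trajectory, real_trajectory,
                              num_samples=100, obs_noise=0.1):
    """One adaptation step for the Gaussian over simulator parameters.

    simulate_trajectory : params -> array of simulated states when replaying the real actions (hypothetical)
    real_trajectory     : array of observed real-world states, same shape as the simulated ones
    """
    rng = np.random.default_rng(0)
    samples = rng.multivariate_normal(mean, cov, size=num_samples)

    # Posterior weight of each sample: likelihood of the real trajectory under that simulator
    # (assumed Gaussian observation model around the simulated states).
    log_w = np.empty(num_samples)
    for i, p in enumerate(samples):
        sim_states = simulate_trajectory(p)
        log_w[i] = -np.sum((sim_states - real_trajectory) ** 2) / (2.0 * obs_noise ** 2)
    weights = np.exp(log_w - log_w.max())   # subtract max for numerical stability
    weights /= weights.sum()

    # Refit the Gaussian over simulator parameters using the posterior weights.
    new_mean = weights @ samples
    centered = samples - new_mean
    new_cov = (weights[:, None] * centered).T @ centered
    return new_mean, new_cov
```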

SLIDE 25

Source Distribution Adaptation

SLIDE 26

Performance on hopper policies

Policies trained on a Gaussian distribution of mean mass 6 and standard deviation 1.5 vs. policies trained on single source domains.

SLIDE 27

Idea: the driving policy is not directly exposed to raw perceptual input or low-level vehicle dynamics.

SLIDE 28

Main idea

Pixels-to-steering-wheel learning is not SIM2REAL transferable:

  • textures / car dynamics mismatch

Label-maps-to-waypoint learning is SIM2REAL transferable:

  • label maps are similar between SIM and REAL, and a low-level controller will take the car from waypoint to waypoint

SLIDE 29
SLIDE 30

Maximum Entropy Reinforcement Learning

Deep Reinforcement Learning and Control Katerina Fragkiadaki

Carnegie Mellon School of Computer Science CMU 10703

Parts of slides borrowed from Russ Salakhutdinov, Rich Sutton, David Silver

SLIDE 31

RL objective

$\pi^* = \arg\max_{\pi} \; \mathbb{E}_{\pi}\left[\sum_{t} R(s_t, a_t)\right]$

SLIDE 32

MaxEntRL objective

$\pi^* = \arg\max_{\pi} \; \mathbb{E}_{\pi}\left[\sum_{t=1}^{T} \underbrace{R(s_t, a_t)}_{\text{reward}} + \alpha \underbrace{H(\pi(\,\cdot\,|s_t))}_{\text{entropy}}\right]$

Why?

  • Better exploration
  • Learning alternative ways of accomplishing the task
  • Better generalization, e.g., in the presence of obstacles a stochastic policy may still succeed.

Promoting stochastic policies

SLIDE 33

Principle of Maximum Entropy

Haarnoja et al., Reinforcement Learning with Deep Energy-Based Policies

Policies that generate similar rewards should be equally probable. We do not want to commit to one policy over the other. Why?

  • Better exploration
  • Learning alternative ways of accomplishing the task
  • Better generalization, e.g., in the presence of obstacles a stochastic policy may still succeed.

SLIDE 34

$d\theta \leftarrow d\theta + \nabla_{\theta'} \log \pi(a_i|s_i;\theta')\,\big(R - V(s_i;\theta'_v)\big) + \beta\,\nabla_{\theta'} H(\pi(s_t;\theta'))$

Mnih et al., Asynchronous Methods for Deep Reinforcement Learning

“We also found that adding the entropy of the policy π to the objective function improved exploration by discouraging premature convergence to suboptimal deterministic policies. This technique was originally proposed by (Williams & Peng, 1991)”
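A minimal PyTorch sketch of this entropy regularization for a discrete-action policy (my own illustration, not the authors' code): the entropy of the current action distribution is added to the policy-gradient objective with weight β.

```python
import torch

def policy_loss_with_entropy_bonus(logits, actions, advantages, beta=0.01):
    """Entropy-regularized policy-gradient loss (one-step, A3C-style).

    logits     : (batch, num_actions) unnormalized action scores from the policy network
    actions    : (batch,) sampled action indices
    advantages : (batch,) e.g. R - V(s), treated as constants
    """
    dist = torch.distributions.Categorical(logits=logits)
    log_probs = dist.log_prob(actions)
    pg_loss = -(log_probs * advantages.detach()).mean()   # policy-gradient term
    entropy_bonus = dist.entropy().mean()                  # H(pi(.|s)) of the current step only
    return pg_loss - beta * entropy_bonus                  # minimizing this maximizes entropy
```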

SLIDE 35

$d\theta \leftarrow d\theta + \nabla_{\theta'} \log \pi(a_i|s_i;\theta')\,\big(R - V(s_i;\theta'_v)\big) + \beta\,\nabla_{\theta'} H(\pi(s_t;\theta'))$

Mnih et al., Asynchronous Methods for Deep Reinforcement Learning

This is just a regularization: this gradient only maximizes the entropy of the current time step, not of future time steps.

SLIDE 36

MaxEntRL objective

$\pi^* = \arg\max_{\pi} \; \mathbb{E}_{\pi}\left[\sum_{t=1}^{T} \underbrace{R(s_t, a_t)}_{\text{reward}} + \alpha \underbrace{H(\pi(\,\cdot\,|s_t))}_{\text{entropy}}\right]$

How can we maximize such an objective? Promoting stochastic policies

SLIDE 37

Recall: Back-up Diagrams

$q_\pi(s, a) = r(s, a) + \gamma \sum_{s' \in \mathcal{S}} T(s'|s, a) \sum_{a' \in \mathcal{A}} \pi(a'|s')\, q_\pi(s', a')$

SLIDE 38

Back-up Diagrams for MaxEnt Objective

$H(\pi(\,\cdot\,|s')) = -\,\mathbb{E}_{a' \sim \pi}\left[\log \pi(a'|s')\right]$

$\pi^* = \arg\max_{\pi} \; \mathbb{E}_{\pi}\left[\sum_{t=1}^{T} \underbrace{R(s_t, a_t)}_{\text{reward}} + \alpha \underbrace{H(\pi(\,\cdot\,|s_t))}_{\text{entropy}}\right]$

SLIDE 39

Back-up Diagrams for MaxEnt Objective

Each action node in the backup diagram now contributes an extra $-\log \pi(a'|s')$ term:

$q_\pi(s, a) = r(s, a) + \gamma \sum_{s' \in \mathcal{S}} T(s'|s, a) \sum_{a' \in \mathcal{A}} \pi(a'|s')\,\big(q_\pi(s', a') - \log \pi(a'|s')\big)$

$\pi^* = \arg\max_{\pi} \; \mathbb{E}_{\pi}\left[\sum_{t=1}^{T} \underbrace{R(s_t, a_t)}_{\text{reward}} + \alpha \underbrace{H(\pi(\,\cdot\,|s_t))}_{\text{entropy}}\right]$

SLIDE 40

(Soft) policy evaluation

Bellman backup equation:

$q_\pi(s, a) = r(s, a) + \gamma \sum_{s' \in \mathcal{S}} T(s'|s, a) \sum_{a' \in \mathcal{A}} \pi(a'|s')\, q_\pi(s', a')$

Soft Bellman backup equation:

$q_\pi(s, a) = r(s, a) + \gamma \sum_{s' \in \mathcal{S}} T(s'|s, a) \sum_{a' \in \mathcal{A}} \pi(a'|s')\,\big(q_\pi(s', a') - \log \pi(a'|s')\big)$

Bellman backup update operator (unknown dynamics):

$Q(s_t, a_t) \leftarrow r(s_t, a_t) + \gamma\,\mathbb{E}_{s_{t+1}, a_{t+1}}\left[Q(s_{t+1}, a_{t+1})\right]$

Soft Bellman backup update operator (unknown dynamics):

$Q(s_t, a_t) \leftarrow r(s_t, a_t) + \gamma\,\mathbb{E}_{s_{t+1}, a_{t+1}}\left[Q(s_{t+1}, a_{t+1}) - \log \pi(a_{t+1}|s_{t+1})\right]$
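A tiny sketch (my own illustration, not from the slides) of computing both backup targets for a sampled transition; the only difference is the extra $-\log \pi(a'|s')$ term on the sampled next action.

```python
def bellman_target(reward, q_next, gamma=0.99):
    """Standard Bellman backup target: r + gamma * Q(s', a'), with a' ~ pi(.|s')."""
    return reward + gamma * q_next

def soft_bellman_target(reward, q_next, log_prob_next, gamma=0.99):
    """Soft Bellman backup target: r + gamma * (Q(s', a') - log pi(a'|s'))."""
    return reward + gamma * (q_next - log_prob_next)

# Usage sketch, per sampled transition (s, a, r, s') and next action a' ~ pi(.|s'):
#   q_next        = Q(s_next, a_next)
#   log_prob_next = log pi(a_next | s_next)
#   target        = soft_bellman_target(r, q_next, log_prob_next)
```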

SLIDE 41

Soft Bellman backup update operator is a contraction

$Q(s_t, a_t) \leftarrow r(s_t, a_t) + \gamma\,\mathbb{E}_{s_{t+1}, a_{t+1}}\left[Q(s_{t+1}, a_{t+1}) - \log \pi(a_{t+1}|s_{t+1})\right]$

$Q(s_t, a_t) \leftarrow r(s_t, a_t) + \gamma\,\mathbb{E}_{s_{t+1}\sim\rho}\big[\mathbb{E}_{a_{t+1}\sim\pi}[Q(s_{t+1}, a_{t+1}) - \log \pi(a_{t+1}|s_{t+1})]\big]$
$\qquad = r(s_t, a_t) + \gamma\,\mathbb{E}_{s_{t+1}\sim\rho,\,a_{t+1}\sim\pi}[Q(s_{t+1}, a_{t+1})] + \gamma\,\mathbb{E}_{s_{t+1}\sim\rho}\,\mathbb{E}_{a_{t+1}\sim\pi}[-\log \pi(a_{t+1}|s_{t+1})]$
$\qquad = r(s_t, a_t) + \gamma\,\mathbb{E}_{s_{t+1}\sim\rho,\,a_{t+1}\sim\pi}[Q(s_{t+1}, a_{t+1})] + \gamma\,\mathbb{E}_{s_{t+1}\sim\rho}\,[H(\pi(\,\cdot\,|s_{t+1}))]$

Rewrite the reward as:

$r_{\text{soft}}(s_t, a_t) = r(s_t, a_t) + \gamma\,\mathbb{E}_{s_{t+1}\sim\rho}\,[H(\pi(\,\cdot\,|s_{t+1}))]$

Then we get the old Bellman operator, which we know is a contraction.

SLIDE 42

Soft Bellman backup update operator

$Q(s_t, a_t) \leftarrow r(s_t, a_t) + \gamma\,\mathbb{E}_{s_{t+1}\sim\rho}\big[\mathbb{E}_{a_{t+1}\sim\pi}[Q(s_{t+1}, a_{t+1}) - \log \pi(a_{t+1}|s_{t+1})]\big]$

We know that:

$V(s_t) = \mathbb{E}_{a_t\sim\pi}[Q(s_t, a_t) - \log \pi(a_t|s_t)]$

Which means that:

$Q(s_t, a_t) \leftarrow r(s_t, a_t) + \gamma\,\mathbb{E}_{s_{t+1}\sim\rho}[V(s_{t+1})]$

SLIDE 43

Soft Policy Iteration

Soft policy iteration iterates between two steps:

  • 1. Soft policy evaluation: fix the policy, apply the soft Bellman backup operator till convergence. This converges to $q_\pi$:

$q_\pi(s, a) = r(s, a) + \gamma\,\mathbb{E}_{s', a'}\big[q_\pi(s', a') - \log \pi(a'|s')\big]$

  • 2. Soft policy improvement: update the policy:

$\pi' = \arg\min_{\pi_k \in \Pi} D_{\mathrm{KL}}\!\left(\pi_k(\,\cdot\,|s_t)\,\Big\|\, \frac{\exp(Q^{\pi}(s_t, \cdot))}{Z^{\pi}(s_t)}\right)$

Soft Actor-Critic: Off-Policy Maximum Entropy Deep Reinforcement Learning with a Stochastic Actor
SLIDE 44

SoftMax

SLIDE 45

Soft Policy Iteration

Soft policy iteration iterates between two steps and leads to a sequence of policies with monotonically increasing soft q values:

  • 1. Soft policy evaluation: fix the policy, apply the soft Bellman backup operator till convergence. This converges to $q_\pi$:

$q_\pi(s, a) = r(s, a) + \gamma\,\mathbb{E}_{s', a'}\big[q_\pi(s', a') - \log \pi(a'|s')\big]$

  • 2. Soft policy improvement: update the policy:

$\pi' = \arg\min_{\pi_k \in \Pi} D_{\mathrm{KL}}\!\left(\pi_k(\,\cdot\,|s_t)\,\Big\|\, \frac{\exp(Q^{\pi}(s_t, \cdot))}{Z^{\pi}(s_t)}\right)$

This so far concerns tabular methods. Next we will use function approximation for the policy and action values.

Soft Actor-Critic: Off-Policy Maximum Entropy Deep Reinforcement Learning with a Stochastic Actor
SLIDE 46

Soft Policy Iteration - Approximation

Use function approximation for the policy, state value, and action value functions: $V_\psi(s_t)$, $Q_\theta(s_t, a_t)$, $\pi_\phi(a_t|s_t)$.

  • 1. Learning the state value function:
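The slide leaves the loss implicit; for reference, the squared-residual objective for the state value function in the SAC paper (Haarnoja et al., 2018) takes, to the best of my recollection, the form:

$J_V(\psi) = \mathbb{E}_{s_t \sim \mathcal{D}}\!\left[ \tfrac{1}{2}\Big( V_\psi(s_t) - \mathbb{E}_{a_t \sim \pi_\phi}\big[\, Q_\theta(s_t, a_t) - \log \pi_\phi(a_t|s_t) \,\big] \Big)^{2} \right]$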
SLIDE 47

Soft Policy Iteration - Approximation

Use function approximation for the policy, state value, and action value functions: $V_\psi(s_t)$, $Q_\theta(s_t, a_t)$, $\pi_\phi(a_t|s_t)$.

  • 2. Learning the state-action value function:
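Again for reference, the corresponding squared-residual objective for the action value function in the SAC paper uses the (target) state value network for bootstrapping, roughly:

$J_Q(\theta) = \mathbb{E}_{(s_t, a_t) \sim \mathcal{D}}\!\left[ \tfrac{1}{2}\Big( Q_\theta(s_t, a_t) - \big( r(s_t, a_t) + \gamma\, \mathbb{E}_{s_{t+1} \sim \rho}\big[ V_{\bar{\psi}}(s_{t+1}) \big] \big) \Big)^{2} \right]$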
SLIDE 48

Soft Policy Iteration - Approximation

Use function approximation for the policy, state value, and action value functions: $V_\psi(s_t)$, $Q_\theta(s_t, a_t)$, $\pi_\phi(a_t|s_t)$.

  • 3. Learning the policy:

$\nabla_\phi J_\pi(\phi) = \nabla_\phi\, \mathbb{E}_{s_t\sim\mathcal{D}}\, \mathbb{E}_{a_t\sim\pi_\phi(\cdot|s_t)}\!\left[\log \frac{\pi_\phi(a_t|s_t)}{\exp(Q_\theta(s_t, a_t))/Z_\theta(s_t)}\right]$

The variable w.r.t. which we take the gradient parametrizes the distribution inside the expectation.

$Z_\theta(s_t) = \int_{\mathcal{A}} \exp(Q_\theta(s_t, a_t))\, da_t$ is independent of $\phi$, so it can be dropped:

$\nabla_\phi J_\pi(\phi) = \nabla_\phi\, \mathbb{E}_{s_t\sim\mathcal{D}}\, \mathbb{E}_{a_t\sim\pi_\phi(\cdot|s_t)}\!\left[\log \frac{\pi_\phi(a_t|s_t)}{\exp(Q_\theta(s_t, a_t))}\right]$

SLIDE 49

Soft Policy Iteration - Approximation

Use function approximation for the policy, state value, and action value functions: $V_\psi(s_t)$, $Q_\theta(s_t, a_t)$, $\pi_\phi(a_t|s_t)$.

  • 3. Learning the policy:

Reparametrization trick: the policy becomes a deterministic function of the state and of Gaussian random variables drawn from a fixed distribution:

$a_t = f_\phi(s_t, \epsilon) = \mu_\phi(s_t) + \epsilon\,\Sigma_\phi(s_t), \quad \epsilon \sim \mathcal{N}(0, I)$

$\nabla_\phi J_\pi(\phi) = \nabla_\phi\, \mathbb{E}_{s_t\sim\mathcal{D}}\, \mathbb{E}_{a_t\sim\pi_\phi(\cdot|s_t)}\!\left[\log \frac{\pi_\phi(a_t|s_t)}{\exp(Q_\theta(s_t, a_t))}\right] = \nabla_\phi\, \mathbb{E}_{s_t\sim\mathcal{D},\,\epsilon\sim\mathcal{N}(0,I)}\!\left[\log \frac{\pi_\phi(a_t|s_t)}{\exp(Q_\theta(s_t, a_t))}\right], \quad a_t = f_\phi(s_t, \epsilon)$
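A PyTorch sketch of the reparametrized policy update for a Gaussian policy (my own minimal illustration, without the tanh squashing used in SAC): actions are a deterministic function of the state and a fixed noise sample ε, so the gradient flows through both $\log \pi_\phi$ and $Q_\theta$.

```python
import torch

def reparameterized_policy_loss(mu, log_std, states, q_net, alpha=1.0):
    """Policy loss E[ alpha * log pi(a|s) - Q(s, a) ] with a = mu + eps * std, eps ~ N(0, I).

    mu, log_std : (batch, action_dim) outputs of the policy network for `states`
    q_net       : callable (states, actions) -> (batch,) Q-value estimates
    """
    std = log_std.exp()
    eps = torch.randn_like(mu)                  # sample from the fixed distribution N(0, I)
    actions = mu + eps * std                    # reparametrization: a = f_phi(s, eps)
    log_prob = torch.distributions.Normal(mu, std).log_prob(actions).sum(dim=-1)
    return (alpha * log_prob - q_net(states, actions)).mean()
```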

SLIDE 50

Soft Policy Iteration - Approximation

SLIDE 51
SLIDE 52

Composability of Maximum Entropy Policies

Composable Deep Reinforcement Learning for Robotic Manipulation, Haarnoja et al.

Imagine we want to satisfy two objectives at the same time, e.g., pick an object up while avoiding an obstacle. We would learn a policy to maximize the sum of the corresponding reward functions. MaxEnt policies allow us to obtain the resulting policy's optimal Q by simply adding the constituent Qs. We can theoretically bound the suboptimality of the resulting policy w.r.t. the policy trained under the addition of rewards. We cannot do this for deterministic policies.
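A small sketch of the composition idea for a discrete-action state (an illustrative assumption, not the paper's implementation): each constituent task has its own soft Q function, and the composed MaxEnt (Boltzmann) policy is formed from their sum.

```python
import numpy as np

def maxent_policy(q_values, alpha=1.0):
    """Boltzmann policy pi(a|s) proportional to exp(Q(s, a) / alpha)."""
    logits = q_values / alpha
    logits -= logits.max()                  # subtract max for numerical stability
    probs = np.exp(logits)
    return probs / probs.sum()

# Soft Q-values of two constituent tasks for one state (illustrative numbers),
# e.g. "reach the object" and "avoid the obstacle":
q_reach = np.array([1.0, 0.2, -0.5])
q_avoid = np.array([-1.0, 0.4, 0.6])

# Approximate the combined task's optimal Q by simply adding the constituent Qs,
# then act with the MaxEnt policy of the sum.
q_combined = q_reach + q_avoid
combined_policy = maxent_policy(q_combined)
```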

SLIDE 53

Composability of Maximum Entropy Policies

Composable Deep Reinforcement Learning for Robotic Manipulation, Haarnoja et al.

SLIDE 54

https://youtu.be/wdexoLS2cWU