SLIDE 1

Learning to plan: Applications of search to robotics

Kevin Xie* and Homanga Bharadhwaj*

*1st-year MSc. students in Computer Science

SLIDE 2

Probabilistic Planning via Sequential Monte Carlo

  • Model-based RL method
  • Control as Inference heuristic
  • Sequential Monte Carlo action sampling

SLIDE 3

Sequential Monte Carlo Tutorial

A method for sampling from sequential distributions.

SLIDE 4

“Perfect” Monte Carlo (MC)

The integral is intractable, but we can sample from p(x) easily. -> Approximate p(x) with N samples drawn from p(x):

Empirical measure and MC estimate: https://www.stats.ox.ac.uk/~doucet/doucet_defreitas_gordon_smcbookintro.pdf [1.3.1]
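The two equations these labels refer to are presumably the standard ones from that tutorial (a sketch in my notation, with i.i.d. samples x^(i) ~ p):

```latex
% Empirical measure built from N i.i.d. samples x^{(i)} \sim p(x)
\hat{p}_N(dx) = \frac{1}{N}\sum_{i=1}^{N} \delta_{x^{(i)}}(dx)
% Monte Carlo estimate of I(f) = \int f(x)\,p(x)\,dx
\hat{I}_N(f) = \frac{1}{N}\sum_{i=1}^{N} f\big(x^{(i)}\big)
```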

SLIDE 5

“Perfect” Monte Carlo (MC)

(Figure: density p(x) with samples drawn directly from it.)

SLIDE 6

Importance Sampling (IS)

The integral is intractable and we can't sample from p(x) easily, but we can sample from q(x). -> Approximate p(x) with N samples drawn from q(x).

https://www.stats.ox.ac.uk/~doucet/doucet_defreitas_gordon_smcbookintro.pdf [1.3.2]
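As a sketch (standard importance sampling, consistent with the cited tutorial section), the identity and the estimator are:

```latex
% Rewrite the target expectation under the proposal q, with weights w(x) = p(x)/q(x)
I(f) = \int f(x)\,p(x)\,dx = \int f(x)\,\frac{p(x)}{q(x)}\,q(x)\,dx
% IS estimate from N samples x^{(i)} \sim q(x)
\hat{I}_N(f) = \frac{1}{N}\sum_{i=1}^{N} \frac{p\big(x^{(i)}\big)}{q\big(x^{(i)}\big)}\, f\big(x^{(i)}\big)
```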

SLIDE 7

Importance Sampling

(Figure: target density p(x) and proposal density q(x).)

SLIDE 8

Sequential Monte Carlo (SMC)

Want to sample a sequence x_{1:t} from a target distribution that factorizes into an initial distribution and per-step factors.

SLIDE 9

Sequential Importance Sampling (SIS)

Sample from a proposal distribution that likewise factorizes into an initial distribution and per-step proposals.

(Figure: particles of x growing over time.)
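A sketch of the sequential structure being assumed here (notation mine): both the target and the proposal factorize into an initial distribution and per-step terms:

```latex
p(x_{1:t}) = p(x_1)\prod_{k=2}^{t} p(x_k \mid x_{1:k-1}),
\qquad
q(x_{1:t}) = q(x_1)\prod_{k=2}^{t} q(x_k \mid x_{1:k-1})
```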

SLIDE 10

(Figure: at t = 1, standard importance sampling weights the proposal particles to give the time-1 particles.)

SLIDE 11

(Figure: from t-1 to t, each time-1 particle is extended with time-2 proposal particles; each extended sequence is a “branch”.)

SLIDE 12

(Figure: the step importance ratio updates the importance weights, turning the time-2 proposal particles into weighted time-2 particles.)

SLIDE 13

(Figure: as before, the step importance ratio updates the importance weights for the time-2 particles.)

But the weights could become very small.

SLIDE 14

SIS with Replacement

Replacement step:

  • Discontinue low-weight branches
  • Refocus particles on high-weight branches

SLIDE 15

SMC: SIS with Replacement

Only high-probability branches survive, yet the particles remain representative of the overall distribution.
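A minimal sketch of this replacement (resampling) step in Python; the multinomial scheme and the function name are illustrative assumptions, not necessarily what the paper uses:

```python
import numpy as np

def resample(branches, weights, rng=np.random.default_rng()):
    """Multinomial resampling: duplicate high-weight branches, drop low-weight ones."""
    weights = np.asarray(weights, dtype=float)
    probs = weights / weights.sum()                    # normalized importance weights
    idx = rng.choice(len(branches), size=len(branches), p=probs)
    new_branches = [branches[i] for i in idx]          # survivors, possibly duplicated
    new_weights = np.full(len(branches), 1.0 / len(branches))  # reset weights to uniform
    return new_branches, new_weights
```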
SLIDE 16

Model-based RL

Learns a model of the environment and uses it for RL

  • Model Predictive Planning (e.g. PETS [Chua et al. 2018])

○ Simulate actions into the future
○ Pick the ones that gave good value
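A minimal random-shooting sketch of this idea in Python; `model` and `reward_fn` are assumed callables standing in for the learned dynamics and reward, and none of the names reflect PETS's actual API:

```python
import numpy as np

def plan_first_action(model, reward_fn, state, horizon=10, n_candidates=500,
                      action_dim=2, rng=np.random.default_rng()):
    """Sample random action sequences, roll them out with the learned model,
    and return the first action of the highest-return sequence (re-planned each step)."""
    best_return, best_first_action = -np.inf, None
    for _ in range(n_candidates):
        actions = rng.uniform(-1.0, 1.0, size=(horizon, action_dim))
        s, total = state, 0.0
        for a in actions:
            total += reward_fn(s, a)   # predicted reward along the imagined rollout
            s = model(s, a)            # learned dynamics: s_{t+1} = f(s_t, a_t)
        if total > best_return:
            best_return, best_first_action = total, actions[0]
    return best_first_action
```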

SLIDE 17

Control as Inference

Control as inference proposes a heuristic for selecting actions. Suppose the agent's current belief is:

  • Action A: lose 1 dollar on average (higher chance of being “optimal”)
  • Action B: lose 2 dollars on average

Control as inference: choose Action A more often than B, but sometimes still choose B.
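With the exponential optimality heuristic introduced on the next slides, this example would give sampling probabilities roughly as follows (my worked example, not from the slides):

```latex
P(A) \propto e^{-1}, \quad P(B) \propto e^{-2}
\;\Rightarrow\;
P(A) = \frac{e^{-1}}{e^{-1} + e^{-2}} \approx 0.73,
\qquad
P(B) = \frac{e^{-2}}{e^{-1} + e^{-2}} \approx 0.27
```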

SLIDE 18

Control as Inference

To define this formally:

Optimality Variable

Suppose an “optimal” future: given that the agent will lose as little money as possible, which action did I likely take? Sample actions according to how likely they would have led to this “optimality”.

SLIDE 19

What is probability of “optimal”?

Heuristic: exponential in the reward (which is always negative).

Lower reward -> exponentially less likely to be “optimal” -> exponentially less likely to be sampled.
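Written out, this is the standard control-as-inference likelihood, which appears to be what the slide describes (well defined because the reward is assumed non-positive):

```latex
p(\mathcal{O}_t = 1 \mid s_t, a_t) = \exp\big(r(s_t, a_t)\big), \qquad r(s_t, a_t) \le 0
```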

SLIDE 20

MDP Setting

MDP: an optimality variable at every point in time. Choose actions in proportion to their probability of optimality over time.
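In the usual control-as-inference formulation this yields a trajectory posterior of the form (a sketch in my notation):

```latex
p(\tau \mid \mathcal{O}_{1:T} = 1)
\;\propto\;
p(\tau)\,\prod_{t=1}^{T} p(\mathcal{O}_t = 1 \mid s_t, a_t)
\;=\;
p(\tau)\,\exp\Big(\sum_{t=1}^{T} r(s_t, a_t)\Big)
```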

SLIDE 21

But inference is hard =(

Can’t efficiently sample from true posterior.

SLIDE 22

SMC to the Rescue

Want to sample futures given they are optimal:

How to do this?

Need a good proposal q(x1:h): an environment model plus a policy q(a|s).

SLIDE 23

Soft Actor Critic (SAC) [Haarnoja et al. 2018]

SAC (a fairly state-of-the-art model-free RL method) learns an approximation to Control as Inference. It gives us an approximate proposal policy q(a|s).
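For reference, the maximum-entropy optimum that SAC's actor is trained towards has the familiar form below (standard background with temperature α, supplied by me rather than taken from the slides):

```latex
\pi^{*}(a \mid s) \;\propto\; \exp\Big(\tfrac{1}{\alpha}\, Q^{\text{soft}}(s, a)\Big)
```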

SLIDE 24

Planning as Inference

Need a maximum sequence length (a finite planning horizon) to be practical.

SMC plans only up to this horizon. What to do about the rest?

SLIDE 25

Planning as Inference

Need a maximum sequence length (a finite planning horizon) to be practical.

SMC plans up to the horizon; beyond it, SAC has a learned approximation (its value V).

SLIDE 26

Planning as Inference

Related to MCTS in AlphaGo Zero. We started with an approximate model-free proposal policy q and a value V (from SAC). Then we looked into the future with our model via SMC, which allowed us to pick a more accurate action (according to Control as Inference).

SLIDE 27

Scope and Limitations

The weight update assumes the model is perfectly accurate. When the environment is stochastic, this encourages risk-seeking behaviours.

SLIDE 28

QMDP-Net

  • Planning under partial observations
  • Learn a model of the environment and a planner simultaneously, end to end
  • Learned model uses discrete states and actions
  • Policy is trained by imitating expert data (supervised learning)

SLIDE 29

Related Work

  • Value Iteration Networks: a fully differentiable neural network architecture for learning to plan. It embeds both a learned model of the environment and a value iteration planning module within. However, it assumes a fully observable setting and hence does not need filtering.
  • Bayesian Filtering: common in robotics. Continuously update a robot’s belief about its state based on the most recent sensor data. Recent works have shown this process to be end-to-end differentiable.

(Figure: policy architecture combining a model, a Bayesian filter over states s, and a planner producing actions a.)

SLIDE 30

Main Contribution

  • Extends VIN by also embedding a Bayesian Filter
  • The entire framework is end-to-end differentiable
SLIDE 31

POMDP (Partially Observable MDP)

  • Definition: POMDP is defined by the following components

State space: latent
Action space: from expert data
Observation space: from expert data
State transition function: learned by a NN
Observation function: learned by a NN
Reward function: learned by a NN

SLIDE 32

POMDP - Bayesian Filtering

  • The agent does not know its exact state and maintains a belief (a probability distribution) over all the states S
  • The belief is recursively updated from the past history, combining the new observation with the transition from the previous belief
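The recursive update referred to is presumably the standard Bayes filter (a sketch, with transition model T and observation model O): propagate the previous belief through the transition, then condition on the new observation o:

```latex
b'(s') \;\propto\; O(o \mid s', a)\,\sum_{s \in S} T(s' \mid s, a)\, b(s)
```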

SLIDE 33

POMDP

  • The planning objective is to obtain a policy that maximizes the expected total discounted reward
  • Solving POMDPs exactly is computationally intractable in the worst case (intuitively, because we need to integrate over all states: blowup!)
  • Approximate solutions are needed
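For reference, the discounted-return objective mentioned in the first bullet is presumably the standard one (sketch):

```latex
J(\pi) \;=\; \mathbb{E}\Big[\sum_{t=0}^{\infty} \gamma^{t}\, R(s_t, a_t)\Big]
```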

SLIDE 34

QMDP-net: Overall architecture

  • There are two main components: the QMDP planner (similar to VIN) and the Bayesian filter

SLIDE 35

QMDP Planner Module

  • The planner module performs value iteration (each step is differentiable). The architecture is very similar to Value Iteration Networks (VIN)
  • Iteratively apply Bellman updates to the Q-value map over states to refine it
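The Bellman update being iterated is presumably the standard one over the learned discrete model (sketch, notation mine):

```latex
Q_{k+1}(s, a) \;=\; R(s, a) + \gamma \sum_{s'} T(s' \mid s, a)\, V_k(s'),
\qquad
V_k(s) = \max_{a} Q_k(s, a)
```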
SLIDE 36

Action selection

  • The obtained Q-value map is weighted by the computed belief over states to obtain a probability distribution over actions
  • Select the action with the maximum q(a) value
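That is, the usual QMDP action score under the current belief b (sketch of the standard QMDP approximation):

```latex
q(a) \;=\; \sum_{s \in S} b(s)\, Q(s, a), \qquad a^{*} = \arg\max_{a} q(a)
```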
SLIDE 37

Highlights, Scope, and Limitations

  • Only demonstrated on imitation learning (RL is possible in principle)
  • The Bayes filter is not “exact” but “useful”
  • The discrete action and state model is unlikely to scale to more complicated environments

SLIDE 38

Thank you for your time!

We will be happy to take questions

SLIDE 39

Appendix... next few slides

Stuff we didn’t have time for...

SLIDE 40

Importance Sampling (IS)

The integral is intractable and we can't sample from p(x) easily, but we can sample from q(x). -> Approximate p(x) with N samples drawn from q(x).

https://www.stats.ox.ac.uk/~doucet/doucet_defreitas_gordon_smcbookintro.pdf [1.3.2]

Also need to be able to evaluate p(x) exactly!

SLIDE 41

Importance Sampling with Self-Normalized Weights

The integral is intractable, we can't sample easily, and we can't evaluate p(x), but we can evaluate p(x) up to a normalizing constant. Note: this case is very important for posterior inference, where the normalizing constant is almost always hard to compute.

SLIDE 42

Importance Sampling with Self-Normalized Weights

The integral is intractable, we can't sample easily, and we can't evaluate p(x), but we can evaluate p(x) up to a normalizing constant C. If we try defining the weight while ignoring C, we see that our IS estimate is off by that multiplicative constant:
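Writing p(x) = p̃(x)/C for the unnormalized density p̃ we can evaluate, the naive weight and the resulting scaling are (sketch):

```latex
\tilde{w}(x) = \frac{\tilde{p}(x)}{q(x)},
\qquad
\frac{1}{N}\sum_{i=1}^{N} \tilde{w}\big(x^{(i)}\big)\, f\big(x^{(i)}\big)
\;\xrightarrow{N \to \infty}\;
\int f(x)\,\tilde{p}(x)\,dx \;=\; C\, I(f)
```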

SLIDE 43

Importance Sampling with Self-Normalized Weights

The integral is intractable, we can't sample easily, and we can't evaluate p(x), but we can evaluate p(x) up to a normalizing constant C. If we try defining the weight while ignoring C, our IS estimate is off by that multiplicative constant. Idea: normalize the weights!

SLIDE 44

Importance Sampling with Self-Normalized Weights

What if we normalize w(x)? The average weight is an estimate of C, so normalizing by the weights amounts to normalizing by C:
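Concretely (standard self-normalized importance sampling, consistent with the cited tutorial):

```latex
\frac{1}{N}\sum_{i=1}^{N} \tilde{w}\big(x^{(i)}\big) \;\approx\; C,
\qquad
\hat{I}_N(f) = \sum_{i=1}^{N} W^{(i)} f\big(x^{(i)}\big),
\quad
W^{(i)} = \frac{\tilde{w}\big(x^{(i)}\big)}{\sum_{j=1}^{N} \tilde{w}\big(x^{(j)}\big)}
```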

SLIDE 45

Importance Sampling with Self-Normalizing Weights

Normalizing by the weights amounts to normalizing by C, which motivates the self-normalized estimator: we explicitly normalize the weights so that they sum to 1. (This departs from the unbiased theory: it incurs a bias but helps reduce variance.)

SLIDE 46

Sequential Importance Sampling (SIS)

Sample from a proposal distribution that factorizes into an initial distribution and per-step updates.

SLIDE 47

The overall algorithm

1. Sample actions from prior

SLIDE 48

The overall algorithm

1. Sample actions from prior
2. Simulate with model

SLIDE 49

The overall algorithm

1. Sample actions from prior
2. Simulate with model
3. Update weight of each branch using reward and SAC ‘Value’

SLIDE 50

The overall algorithm

1. Sample actions from prior
2. Simulate with model
3. Update weight of each branch using reward and SAC ‘Value’
4. Reallocate search particles to more promising branches

SLIDE 51

The overall algorithm

1. Sample actions from prior
2. Simulate with model
3. Update weight of each branch using reward and SAC ‘Value’
4. Reallocate search particles to more promising branches
5. Repeat until horizon

SLIDE 52

The overall algorithm

1. Sample actions from prior
2. Simulate with model
3. Update weight of each branch using reward and SAC ‘Value’
4. Reallocate search particles to more promising branches
5. Repeat until horizon
6. Randomly select first action from remaining branches
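A minimal sketch of this loop in Python. All names (`policy_sample`, `model_step`, `value_fn`, `reward_fn`) and the exponential weight update built from reward plus a value bootstrap are my illustration of the idea, not the paper's exact formulation:

```python
import numpy as np

def smc_plan(state, model_step, policy_sample, value_fn, reward_fn,
             horizon=10, n_particles=100, rng=np.random.default_rng()):
    """Plan one action with SMC: roll particles forward with the proposal policy,
    reweight branches by predicted reward and value, resample, and pick a first action."""
    states = [state] * n_particles
    first_actions = [None] * n_particles
    for t in range(horizon):
        actions = [policy_sample(s) for s in states]                       # 1. sample from the (SAC) prior
        next_states = [model_step(s, a) for s, a in zip(states, actions)]  # 2. simulate with the model
        # 3. illustrative weight update: reward plus change in the learned value
        log_w = np.array([reward_fn(s, a) + value_fn(s2) - value_fn(s)
                          for s, a, s2 in zip(states, actions, next_states)])
        if t == 0:
            first_actions = actions
        # 4. resample: reallocate particles to the more promising branches
        probs = np.exp(log_w - log_w.max())
        probs /= probs.sum()
        idx = rng.choice(n_particles, size=n_particles, p=probs)
        states = [next_states[i] for i in idx]
        first_actions = [first_actions[i] for i in idx]
        # 5. repeat until the horizon
    # 6. randomly select the first action of a surviving branch
    return first_actions[rng.integers(n_particles)]
```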

SLIDE 53

Deriving weight updates (read the paper for details)

SLIDE 54

Connection to MCTS in AlphaGo Zero

|                          | Planning with SMC | AlphaGo Zero               |
| Move selection criteria  |                   | Q upper confidence bound   |
| Environment model        | Learned p_model   | Self-play p                |
| Amortised prior policy   | q from SAC        | Learned prior p            |
| Amortised prior “value”  | V from SAC        | V in upper confidence bound |

SLIDE 55

Sequential Importance Sampling

Grow the sequence incrementally and update w recursively. But most particles might become useless (w -> 0).
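The recursive update referred to is presumably the standard SIS weight recursion (sketch, notation mine):

```latex
w_t(x_{1:t}) \;=\; w_{t-1}(x_{1:t-1})\,
\frac{p(x_{1:t})}{p(x_{1:t-1})\, q(x_t \mid x_{1:t-1})}
```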