CS287 Advanced Robotics Lecture 4 (Fall 2019): Function Approximation


SLIDE 1

CS287 Advanced Robotics Lecture 4 (Fall 2019): Function Approximation

Pieter Abbeel, UC Berkeley EECS

SLIDE 2

Value Iteration

Algorithm:
- Start with V₀(s) = 0 for all s.
- For i = 1, …, H: for all states s in S:

    Vᵢ(s) ← max_a Σ_{s'} T(s,a,s') [ R(s,a,s') + γ Vᵢ₋₁(s') ]
    πᵢ(s) ← argmax_a Σ_{s'} T(s,a,s') [ R(s,a,s') + γ Vᵢ₋₁(s') ]

  This is called a value update or Bellman update/back-up.

- Vᵢ(s) = expected sum of rewards accumulated starting from state s, acting optimally for i steps; πᵢ(s) = optimal action when in state s and getting to act for i steps.

Impractical for large state spaces. A similar issue arises for policy iteration and linear programming.
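The back-up above translates directly into code. A minimal tabular sketch, assuming the MDP is given as numpy arrays T[s,a,s'] and R[s,a,s'] (this representation, and the code itself, are illustrative rather than from the slides):

```python
import numpy as np

def value_iteration(T, R, gamma, H):
    """Tabular value iteration (assumes H >= 1).
    T[s,a,s2] = transition probability, R[s,a,s2] = reward."""
    S, A, _ = T.shape
    V = np.zeros(S)                                   # V_0(s) = 0 for all s
    for i in range(H):
        # Q[s,a] = sum_{s'} T(s,a,s') * (R(s,a,s') + gamma * V(s'))
        Q = np.einsum('sap,sap->sa', T, R + gamma * V[None, None, :])
        V = Q.max(axis=1)                             # Bellman back-up
    pi = Q.argmax(axis=1)                             # optimal action, horizon H
    return V, pi
```

The impracticality for large state spaces is visible here: V, T, and R all scale with |S|.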

SLIDE 3

Outline
- Function approximation
- Value iteration with function approximation
- Policy iteration with function approximation
- Linear programming with function approximation

SLIDE 4

Function Approximation Example 1: Tetris
- state: board configuration + shape of the falling piece; ~2^200 states!
- action: rotation and translation applied to the falling piece
- 22 features aka basis functions φᵢ:
  - Ten basis functions, 0, …, 9, mapping the state to the height h[k] of each column.
  - Nine basis functions, 10, …, 18, each mapping the state to the absolute difference between heights of successive columns: |h[k+1] − h[k]|, k = 1, …, 9.
  - One basis function, 19, that maps state to the maximum column height: max_k h[k].
  - One basis function, 20, that maps state to the number of 'holes' in the board.
  - One basis function, 21, that is equal to 1 in every state.

  V̂_θ(s) = Σ_{i=0}^{21} θᵢ φᵢ(s) = θ⊤φ(s)

[Bertsekas & Ioffe, 1996 (TD); Bertsekas & Tsitsiklis, 1996 (TD); Kakade, 2002 (policy gradient); Farias & Van Roy, 2006 (approximate LP)]
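To make the feature map concrete, here is a sketch of these 22 basis functions for a board stored as a (rows × 10) 0/1 numpy array with row 0 at the top; the board representation and the function names are assumptions for illustration, not something the slides specify:

```python
import numpy as np

def tetris_features(board):
    """phi(s): the 22 Tetris basis functions listed above."""
    rows, cols = board.shape
    # h[k]: height of column k, from its topmost filled cell down to the floor
    h = np.array([rows - board[:, k].argmax() if board[:, k].any() else 0
                  for k in range(cols)])
    holes = int(h.sum() - board.sum())         # empty cells buried below a column top
    return np.concatenate([h,                  # features 0..9: column heights
                           np.abs(np.diff(h)), # features 10..18: |h[k+1] - h[k]|
                           [h.max(), holes, 1.0]])  # features 19, 20, 21

def v_hat(theta, board):
    """Linear value estimate V_theta(s) = theta^T phi(s)."""
    return theta @ tetris_features(board)
```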

SLIDE 5

Function Approximation Example 2: Pacman

V(s) = θ₀ + θ₁ · "distance to closest ghost" + θ₂ · "distance to closest power pellet" + θ₃ · "in dead-end" + θ₄ · "closer to power pellet than ghost" + … = Σ_{i=0}^{n} θᵢ φᵢ(s) = θ⊤φ(s)

SLIDE 6

Function Approximation Example 3: Nearest Neighbor
- 0'th order approximation (1-nearest neighbor): only store values for x1, x2, …, x12; call these values θ₁, θ₂, …, θ₁₂. Assign any other state the value of the nearest "x" state; for the state s shown in the figure, V̂(s) = V̂(x4) = θ₄.

  Here φ(s) is the indicator vector of the nearest stored state (a single 1, all other entries 0), so V̂(s) = θ⊤φ(s).

[Figure: a grid of stored states x1, …, x12; the query state s lies nearest to x4.]

SLIDE 7

Function Approximation Example 4: k-Nearest Neighbor
- 1st order approximation (k-nearest-neighbor interpolation): only store values for x1, x2, …, x12; call these values θ₁, θ₂, …, θ₁₂. Assign any other state s the interpolated value of its nearest 4 "x" states, e.g.

  V̂(s) = φ₁(s)θ₁ + φ₂(s)θ₂ + φ₅(s)θ₅ + φ₆(s)θ₆

  where φ(s) has nonzero interpolation weights only on the four nearest states (0.2, 0.6, 0.05, 0.15 in the slide's example) and zeros elsewhere, so again V̂(s) = θ⊤φ(s).

[Figure: the same grid of stored states x1, …, x12; here s is interpolated among its neighbors x1, x2, x5, x6.]
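Both nearest-neighbor examples fit one template: φ(s) places nonnegative weights summing to 1 on stored states. A sketch, assuming Euclidean distance and inverse-distance weights (the slides show only the resulting weights, not a weighting formula):

```python
import numpy as np

def knn_features(s, X, k=1):
    """phi(s) over stored states X (one row per x_i). k=1 gives the 0'th
    order (1-nearest-neighbor) indicator vector; k=4 gives a 1st order
    interpolation as in the example above."""
    d = np.linalg.norm(X - s, axis=1)         # distance to every stored state
    nearest = np.argsort(d)[:k]
    w = 1.0 / (d[nearest] + 1e-12)            # closer states get larger weight
    phi = np.zeros(len(X))
    phi[nearest] = w / w.sum()                # weights >= 0 and sum to 1
    return phi

# V_hat(s) = theta @ knn_features(s, X, k); with k=1 this is a lookup of
# the nearest theta_i.
```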

SLIDE 8

More Function Approximation Examples
- S = ℝ, V̂(s) = θ₁ + θ₂ s
- S = ℝ, V̂(s) = θ₁ + θ₂ s + θ₃ s²
- S = ℝ, V̂(s) = Σ_{i=0}^{n} θᵢ sⁱ
- more expressive parametric families (e.g. neural net)

SLIDE 9

Function Approximation

Main idea:
- Use approximation V̂_θ of the true value function V*; θ is a free parameter to be chosen from its domain Θ.
- Representation size: down from |S| to |Θ|
  - + : fewer parameters to estimate
  - − : less expressiveness; typically there exist many V* for which there is no θ such that V̂_θ = V*

SLIDE 10

Supervised Learning
- Given: set of examples (s⁽¹⁾, V(s⁽¹⁾)), (s⁽²⁾, V(s⁽²⁾)), …, (s⁽ᵐ⁾, V(s⁽ᵐ⁾))
- Asked for: "best" V̂_θ
- Representative approach: find θ through least squares:

  min_{θ∈Θ} Σ_{i=1}^{m} ( V̂_θ(s⁽ⁱ⁾) − V(s⁽ⁱ⁾) )²

SLIDE 11

Supervised Learning Example
- Linear regression:

  min_{θ₀,θ₁} Σ_{i=1}^{n} ( θ₀ + θ₁ x⁽ⁱ⁾ − y⁽ⁱ⁾ )²

[Figure: scatter plot with fitted line; the vertical gap between prediction and observation is the error or "residual".]

SLIDE 12

Supervised Learning Example
- Neural Nets

SLIDE 13

Single (Biological) Neuron

[image source: cs231n.stanford.edu; slides 13-18 adapted from Full Stack Deep Learning (March 2019), Pieter Abbeel, Sergey Karayev, Josh Tobin, L0: Background]

SLIDE 14

Single (Artificial) Neuron

[Figure: inputs are weighted, summed, and passed through an activation g; image source: cs231n.stanford.edu]

SLIDE 15

Common Activation Functions

[source: MIT 6.S191 introtodeeplearning.com]

SLIDE 16

Neural Network

[Figure: fully-connected network with input x, hidden layers z⁽¹⁾, z⁽²⁾, z⁽³⁾, and output y = f(x, w).]

Notation: the choice of w determines the function from x to y.
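A forward pass matching this picture, as a sketch (the `weights` format, layer count, and tanh activation are illustrative assumptions):

```python
import numpy as np

def mlp_forward(x, weights, g=np.tanh):
    """y = f(x, w): forward pass with hidden activations z(1), z(2), z(3).
    `weights` is a list of (W, b) pairs, one per layer."""
    z = x
    for W, b in weights[:-1]:
        z = g(W @ z + b)          # hidden layer: z(l) = g(W z(l-1) + b)
    W, b = weights[-1]
    return W @ z + b              # linear output layer

# the choice of w = (W, b) pairs determines the function x -> y
```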

SLIDE 17

What Functions Can a Neural Net Represent?

Does there exist a choice for w to make this work?

[images source: neuralnetworksanddeeplearning.com]

SLIDE 18

Universal Function Approximation Theorem
- In words: given any continuous function f(x), if a 2-layer neural network has enough hidden units, then there is a choice of weights that allows it to closely approximate f(x).

Cybenko (1989), "Approximation by Superpositions of a Sigmoidal Function"; Hornik (1991), "Approximation Capabilities of Multilayer Feedforward Networks"; Leshno and Schocken (1991), "Multilayer Feedforward Networks with Non-Polynomial Activation Functions Can Approximate Any Function"


SLIDE 20

Overfitting

[Figure: noisy data points fit with a degree-15 polynomial.]

SLIDE 21

Avoiding Overfitting
- Reduce number of features or size of the network
- Regularize (e.g., penalize the magnitude of θ)
- Early stopping: stop training updates once loss increases on hold-out data
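A minimal early-stopping loop, as a sketch (the `step` and `val_loss` callables and the patience rule are illustrative assumptions, not course code):

```python
def train_with_early_stopping(step, val_loss, max_iters=10_000, patience=20):
    """step(): one training update; val_loss(): loss on held-out data."""
    best, best_iter = float('inf'), 0
    for it in range(max_iters):
        step()
        loss = val_loss()
        if loss < best:
            best, best_iter = loss, it     # hold-out loss still improving
        elif it - best_iter >= patience:   # no improvement for a while: stop
            break
    return best
```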

SLIDE 22

Status
- Function approximation through supervised learning
- BUT: where do the supervised examples come from?

SLIDE 24

Value Iteration with Function Approximation
- Initialize by choosing some setting for θ⁽⁰⁾
- Iterate for i = 0, 1, 2, …, H:
  - Step 0: Pick some S' ⊆ S (typically |S'| << |S|)
  - Step 1: Bellman back-ups:

    ∀s ∈ S': V̄_{i+1}(s) ← max_a Σ_{s'} T(s,a,s') [ R(s,a,s') + γ V̂_{θ⁽ⁱ⁾}(s') ]

  - Step 2: Supervised learning: find θ⁽ⁱ⁺¹⁾ as the solution of

    min_θ Σ_{s∈S'} ( V̂_θ(s) − V̄_{i+1}(s) )²
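Putting Steps 0-2 together for a linear approximator V̂_θ(s) = θ⊤φ(s) gives a short loop. A sketch, assuming the same array-based MDP representation as before plus a feature matrix Phi with Phi[s] = φ(s) (names and layout are illustrative):

```python
import numpy as np

def approx_value_iteration(T, R, Phi, S0, gamma, H, theta0):
    """Value iteration with linear function approximation, following the
    scheme above. S0 is the list of sampled states (S' in the slides)."""
    theta = theta0
    for i in range(H):
        V = Phi @ theta                    # V_theta(i)(s') for every state s'
        # Step 1: Bellman back-ups on the sampled states only
        Q = np.einsum('sap,sap->sa', T[S0], R[S0] + gamma * V[None, None, :])
        targets = Q.max(axis=1)            # V_bar_{i+1}(s) for s in S'
        # Step 2: supervised learning, here exact least squares
        theta, *_ = np.linalg.lstsq(Phi[S0], targets, rcond=None)
    return theta
```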

SLIDE 25

Value Iteration w/Function Approximation --- Example
- Mini-tetris: two types of blocks, can only choose translation (not rotation)
- Example state: [board configuration shown as a picture]
- Reward = 1 for placing a block
- Sink state / game over is reached when a block is placed such that part of it extends above the red rectangle
- If you have a complete row, it gets cleared

SLIDE 26

Value Iteration w/Function Approximation --- Example

S' = { [four board configurations shown as pictures] }

SLIDE 27

Value Iteration w/Function Approximation --- Example

S' = { [the same four board configurations] }
- 10 features (also called basis functions) φᵢ:
  - Four basis functions, 0, …, 3, mapping the state to the height h[k] of each of the four columns.
  - Three basis functions, 4, …, 6, each mapping the state to the absolute difference between heights of successive columns: |h[k+1] − h[k]|, k = 1, …, 3.
  - One basis function, 7, that maps state to the maximum column height: max_k h[k].
  - One basis function, 8, that maps state to the number of 'holes' in the board.
  - One basis function, 9, that is equal to 1 in every state.
- Init with θ⁽⁰⁾ = (−1, −1, −1, −1, −2, −2, −2, −3, −2, 20)

SLIDE 28

Value Iteration w/Function Approximation --- Example
- Bellman back-ups for the states in S'. For the first state (the boards appear as pictures on the slide), there is one term per placement of the falling block, and each placement leads to two equally likely successor states, one per type of the next falling block:

  V(s₁) = max over the four placements a of
    0.5 · (1 + γ V(successor of a, next block type 1)) + 0.5 · (1 + γ V(successor of a, next block type 2))


SLIDE 31

Value Iteration w/Function Approximation --- Example
- The same back-up with each successor's value written as θ⊤φ. The two successors of a placement differ only in the next falling block, so they share a feature vector:

  φ₁ = (6,2,4,0, 4,2,4, 6, 0, 1)   (placement 1)
  φ₂ = (2,6,4,0, 4,2,4, 6, 0, 1)   (placement 2)
  sink states, V = 0               (placement 3)
  φ₄ = (0,0,2,2, 0,2,0, 2, 0, 1)   (placement 4)

  V(s₁) = max { 0.5·(1 + γ θ⊤φ₁) + 0.5·(1 + γ θ⊤φ₁),
                0.5·(1 + γ θ⊤φ₂) + 0.5·(1 + γ θ⊤φ₂),
                0.5·(1 + γ·0) + 0.5·(1 + γ·0),
                0.5·(1 + γ θ⊤φ₄) + 0.5·(1 + γ θ⊤φ₄) }

SLIDE 32

Value Iteration w/Function Approximation --- Example
- Plugging in θ⁽⁰⁾⊤φ for each successor:

  V(s₁) = max { 0.5·(1 + γ(−30)) + 0.5·(1 + γ(−30)),
                0.5·(1 + γ(−30)) + 0.5·(1 + γ(−30)),
                0.5·(1 + γ·0) + 0.5·(1 + γ·0),
                0.5·(1 + γ·6) + 0.5·(1 + γ·6) }
        = 1 + 0.9·6 = 6.4 (for γ = 0.9)
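The arithmetic can be checked directly. A sketch using the successor feature vectors from the previous slide; the constant feature's weight is taken as 20, which is what the values −30 and 6 above imply:

```python
import numpy as np

theta0 = np.array([-1, -1, -1, -1, -2, -2, -2, -3, -2, 20])
gamma = 0.9

succ_values = [
    theta0 @ np.array([6, 2, 4, 0, 4, 2, 4, 6, 0, 1]),  # placement 1: -30
    theta0 @ np.array([2, 6, 4, 0, 4, 2, 4, 6, 0, 1]),  # placement 2: -30
    0.0,                                                 # placement 3: sink state
    theta0 @ np.array([0, 0, 2, 2, 0, 2, 0, 2, 0, 1]),  # placement 4: 6
]
# both successors of a placement share the same board features, so
# 0.5*(1 + gamma*V) + 0.5*(1 + gamma*V) reduces to 1 + gamma*V
backup = max(1 + gamma * v for v in succ_values)
print(backup)  # 6.4
```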

SLIDE 33

Value Iteration w/Function Approximation --- Example
- Bellman back-ups for the second state in S', with θ⁽⁰⁾ = (−1, −1, −1, −1, −2, −2, −2, −3, −2, 20). Three of the four placements lead to sink states (game over, V = 0), each contributing 0.5·(1 + γ·0) + 0.5·(1 + γ·0) = 1. The fourth placement completes a row, which gets cleared, leaving the empty board: φ = (0,0,0,0, 0,0,0, 0, 0, 1), so θ⊤φ = 20 and

  V(s₂) = max { 1, 1, 1, 0.5·(1 + γ·20) + 0.5·(1 + γ·20) } = 1 + 0.9·20 = 19

SLIDE 34

Value Iteration w/Function Approximation --- Example
- Bellman back-ups for the third state in S' (three placements), with θ⁽⁰⁾ = (−1, −1, −1, −1, −2, −2, −2, −3, −2, 20). Successor feature vectors and their values:

  φ = (0,0,0,0, 0,0,0, 0, 0, 1) → θ⊤φ = 20
  φ = (2,4,4,0, 2,0,4, 4, 0, 1) → θ⊤φ = −14
  φ = (4,4,0,0, 0,4,0, 4, 0, 1) → θ⊤φ = −8

  V(s₃) = max { 0.5·(1 + γ·20) + 0.5·(1 + γ·20),
                0.5·(1 + γ·(−14)) + 0.5·(1 + γ·(−14)),
                0.5·(1 + γ·(−8)) + 0.5·(1 + γ·(−8)) } = 1 + 0.9·20 = 19

SLIDE 35

Value Iteration w/Function Approximation --- Example
- Bellman back-ups for the fourth state in S' (three placements), with θ⁽⁰⁾ = (−1, −1, −1, −1, −2, −2, −2, −3, −2, 20). Successor feature vectors and their values:

  φ = (4,0,6,6, 4,6,0, 6, 4, 1) → θ⊤φ = −42
  φ = (4,6,6,0, 2,0,6, 6, 4, 1) → θ⊤φ = −38
  φ = (6,6,4,0, 0,2,4, 6, 4, 1) → θ⊤φ = −34

  V(s₄) = max { 0.5·(1 + γ·(−42)) + 0.5·(1 + γ·(−42)),
                0.5·(1 + γ·(−38)) + 0.5·(1 + γ·(−38)),
                0.5·(1 + γ·(−34)) + 0.5·(1 + γ·(−34)) } = 1 + 0.9·(−34) = −29.6

SLIDE 36

Value Iteration w/Function Approximation --- Example
- After running the Bellman back-ups for all 4 states in S' we have:

  V(s₁) = 6.4, V(s₂) = 19, V(s₃) = 19, V(s₄) = −29.6

  with feature vectors

  φ(s₁) = (2,2,4,0, 0,2,4, 4, 0, 1)
  φ(s₂) = (4,4,4,0, 0,0,4, 4, 0, 1)
  φ(s₃) = (2,2,0,0, 0,2,0, 2, 0, 1)
  φ(s₄) = (4,0,4,0, 4,4,4, 4, 0, 1)

- We now run supervised learning on these 4 examples to find a new θ:

  min_θ (6.4 − θ⊤φ(s₁))² + (19 − θ⊤φ(s₂))² + (19 − θ⊤φ(s₃))² + ((−29.6) − θ⊤φ(s₄))²

- Running least squares gives:

  θ⁽¹⁾ = (0.195, 6.24, −2.11, 0, −6.05, 0.13, −2.11, 2.13, 0, 1.59)
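This regression step can be reproduced numerically. The pairing of back-up targets with feature vectors below is inferred from the slide layout, but it checks out against θ⁽¹⁾ (e.g. θ⁽¹⁾⊤φ(s₁) ≈ 6.4). With 4 examples and 10 parameters the problem is underdetermined; np.linalg.lstsq returns a minimum-norm exact fit:

```python
import numpy as np

Phi = np.array([
    [2, 2, 4, 0, 0, 2, 4, 4, 0, 1],   # phi(s1), target 6.4
    [4, 4, 4, 0, 0, 0, 4, 4, 0, 1],   # phi(s2), target 19
    [2, 2, 0, 0, 0, 2, 0, 2, 0, 1],   # phi(s3), target 19
    [4, 0, 4, 0, 4, 4, 4, 4, 0, 1],   # phi(s4), target -29.6
])
targets = np.array([6.4, 19.0, 19.0, -29.6])

theta1, *_ = np.linalg.lstsq(Phi, targets, rcond=None)
# should agree with the slide's theta(1) up to rounding, if that solution
# was also the minimum-norm one
```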

SLIDE 37

Value Iteration with Neural Net Function Approximation
- Initialize by choosing some setting for θ⁽⁰⁾
- Iterate for i = 0, 1, 2, …, H:
  - Step 0: Pick some S' ⊆ S (typically |S'| << |S|)
  - Step 1: Bellman back-ups:

    ∀s ∈ S': V̄_{i+1}(s) ← max_a Σ_{s'} T(s,a,s') [ R(s,a,s') + γ V̂_{θ⁽ⁱ⁾}(s') ]

  - Step 2: Supervised learning: find θ⁽ⁱ⁺¹⁾ as the solution of

    min_θ Σ_{s∈S'} ( V̂_θ(s) − V̄_{i+1}(s) )²

    To avoid overfitting: only a small number of gradient updates on the objective, or early stopping based on a hold-out set.

SLIDE 38

Potential Guarantees?

SLIDE 39

Theoretical Analysis of Value Iteration + Function Approximation
- We'll consider the following variation on the algorithm. Assume we iterate over:
  - VI back-up for ALL states
  - Function approximation

Note: "for ALL states" is not practical (that's why we do function approximation). But (i) it's helpful to theoretically think through things; (ii) if we have a negative result, it's an even stronger negative result.

SLIDE 41

Simple Example
- Two states x1 and x2; all rewards are 0 (so V* = 0).
- Function approximator: V̂ = [1 2]·θ, i.e. V̂(x1) = θ and V̂(x2) = 2θ, with a single parameter θ.

[Figure: the two-state chain with its transitions; r = 0 on every transition.]
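The slide specifies only the two states, the feature vector [1 2], and zero rewards. Assuming the transition structure of the classic Tsitsiklis and Van Roy example this appears to depict, where both states move deterministically to x2, a few lines show the problem:

```python
import numpy as np

phi = np.array([1.0, 2.0])   # phi(x1) = 1, phi(x2) = 2, so V_hat = phi * theta
gamma = 0.9
theta = 1.0
for i in range(20):
    # Bellman back-up: both states transition to x2 with reward 0,
    # so V_bar(x1) = V_bar(x2) = gamma * V_hat(x2) = gamma * 2 * theta
    targets = gamma * 2 * theta * np.ones(2)
    # least-squares projection back onto the approximator's span
    theta = (phi @ targets) / (phi @ phi)   # works out to 1.2 * gamma * theta
    print(theta)
# each iteration multiplies theta by 1.2*gamma, so for gamma > 5/6 this
# diverges even though the true V* = 0 is representable (theta = 0)
```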

SLIDE 43

Composing Operators
- Definition. An operator G is a non-expansion with respect to a norm ||·|| if ||G V − G V'|| ≤ ||V − V'|| for all V, V'.
- Fact. If the operator F is a γ-contraction with respect to a norm ||·|| and the operator G is a non-expansion with respect to the same norm, then the sequential application of the operators G and F is a γ-contraction, i.e., ||G(F V) − G(F V')|| ≤ γ ||V − V'||.
- Corollary. If the supervised learning step is a non-expansion, then value iteration with function approximation is a γ-contraction, and in this case we have a convergence guarantee.

SLIDE 44

Averager Function Approximators Are Non-Expansions
- Examples:
  - nearest neighbor (aka state aggregation)
  - linear interpolation over triangles (tetrahedrons, …)

SLIDE 46

Averager Function Approximators Are Non-Expansions
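Why averagers are non-expansions, in one step (a sketch, assuming Gordon's (1995) definition: (G V)(s) = w₀(s) + Σ_{s'} w(s,s') V(s') with fixed w(s,s') ≥ 0 and Σ_{s'} w(s,s') ≤ 1):

```latex
\begin{aligned}
\bigl|(G V)(s) - (G V')(s)\bigr|
  &= \Bigl|\sum_{s'} w(s,s')\,\bigl(V(s') - V'(s')\bigr)\Bigr| \\
  &\le \Bigl(\sum_{s'} w(s,s')\Bigr)\,\lVert V - V'\rVert_\infty
   \;\le\; \lVert V - V'\rVert_\infty ,
\end{aligned}
```

so ||G V − G V'||_∞ ≤ ||V − V'||_∞: the constant term w₀ cancels and the weights cannot amplify differences.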

SLIDE 47

Guarantees for Fixed Point
- I.e., if we pick a non-expansion function approximator which can approximate J* well, then we obtain a good value function estimate.
- To apply to discretization: use continuity assumptions to show that J* can be approximated well by the chosen discretization scheme.

SLIDE 49

Linear Regression ☹

Example taken from Gordon, 1995.

SLIDE 50

Outline
- Function approximation
- Value iteration with function approximation
- Policy iteration with function approximation
- Linear programming with function approximation

SLIDE 51

Policy Iteration

One iteration of policy iteration:
- Policy evaluation: solve for V^{πₖ}: V^{πₖ}(s) = Σ_{s'} T(s, πₖ(s), s') [ R(s, πₖ(s), s') + γ V^{πₖ}(s') ]  ← insert function approximation here
- Policy improvement: πₖ₊₁(s) ← argmax_a Σ_{s'} T(s,a,s') [ R(s,a,s') + γ V^{πₖ}(s') ]

Repeat until the policy converges. At convergence: optimal policy; and converges faster under some conditions.

SLIDE 52

Approximate Policy Evaluation is a Contraction!
- IF we do weighted linear regression, weighted by the state visitation frequencies under the current policy,
- THEN the resulting projection is a contraction w.r.t. the weighted 2-norm.
- The policy-evaluation Bellman update is a contraction w.r.t. the same norm → guaranteed convergence ☺
- Want to see the math? https://inst.eecs.berkeley.edu/~cs294-40/fa08/scribes/lecture5.pdf
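A sketch of the iteration this slide analyzes: back up values under the fixed policy, then project with visitation-weighted least squares. The array representation, the normal-equations solve, and the function name are assumptions, not the course's code:

```python
import numpy as np

def approx_policy_evaluation(T_pi, R_pi, Phi, mu, gamma, iters=100):
    """Approximate evaluation of a fixed policy pi.
    T_pi[s,s2], R_pi[s,s2]: dynamics/rewards under pi; mu: state visitation
    frequencies of pi, used as regression weights."""
    W = np.diag(mu)
    theta = np.zeros(Phi.shape[1])
    for _ in range(iters):
        # Bellman back-up under pi: sum_{s'} T(s,s') [R(s,s') + gamma V(s')]
        targets = (T_pi * (R_pi + gamma * (Phi @ theta)[None, :])).sum(axis=1)
        # mu-weighted least squares:
        # min_theta sum_s mu(s) (theta^T phi(s) - target_s)^2
        theta = np.linalg.solve(Phi.T @ W @ Phi, Phi.T @ W @ targets)
    return theta
```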

SLIDE 54

Extra Intermezzo on Incompatible Norms

SLIDE 55

Recent Related Paper**
- Towards Characterizing Divergence in Deep Q-Learning. Joshua Achiam, Ethan Knight, Pieter Abbeel. arXiv:1903.08894.

SLIDE 56

Outline
- Function approximation
- Value iteration with function approximation
- Policy iteration with function approximation
- Linear programming with function approximation

SLIDE 58

Infinite Horizon Linear Program**

min_V Σ_{s∈S} μ₀(s) V(s)
s.t.  V(s) ≥ Σ_{s'} T(s,a,s') [ R(s,a,s') + γ V(s') ],  ∀s ∈ S, a ∈ A

where μ₀ is a probability distribution over S, with μ₀(s) > 0 for all s in S.

- Theorem. V* is the solution to the above LP.

SLIDE 59

Infinite Horizon Linear Program**

Original LP over V:

min_V Σ_{s∈S} μ₀(s) V(s)
s.t.  V(s) ≥ Σ_{s'} T(s,a,s') [ R(s,a,s') + γ V(s') ],  ∀s ∈ S, a ∈ A

Let V̂_θ(s) = θ⊤φ(s), and consider S' rather than S:

min_θ Σ_{s∈S'} μ₀(s) θ⊤φ(s)
s.t.  θ⊤φ(s) ≥ Σ_{s'} T(s,a,s') [ R(s,a,s') + γ θ⊤φ(s') ],  ∀s ∈ S', a ∈ A

We find approximate value function V̂_θ(s) = θ⊤φ(s).
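This drops directly into an off-the-shelf LP solver. A sketch with scipy, assuming the same array-based MDP representation used earlier (and that the LP is bounded, e.g. because the constant feature is included):

```python
import numpy as np
from scipy.optimize import linprog

def approx_lp(T, R, Phi, mu0, gamma, S0):
    """Approximate LP above: variables theta, one constraint per (s, a)."""
    c = mu0[S0] @ Phi[S0]          # objective: sum_{s in S'} mu0(s) theta^T phi(s)
    A_ub, b_ub = [], []
    for s in S0:
        for a in range(T.shape[1]):
            # sum_{s'} T(s,a,s')[R + gamma theta^T phi(s')] <= theta^T phi(s)
            A_ub.append(gamma * T[s, a] @ Phi - Phi[s])
            b_ub.append(-(T[s, a] @ R[s, a]))
    res = linprog(c, A_ub=np.array(A_ub), b_ub=np.array(b_ub),
                  bounds=[(None, None)] * Phi.shape[1])
    return res.x                   # theta
```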

SLIDE 60

Approximate Linear Program – Guarantees**

min_θ Σ_{s∈S'} μ₀(s) θ⊤φ(s)
s.t.  θ⊤φ(s) ≥ Σ_{s'} T(s,a,s') [ R(s,a,s') + γ θ⊤φ(s') ],  ∀s ∈ S', a ∈ A

- LP solver will converge.
- Solution quality [de Farias and Van Roy, 2002]: assuming one of the features is the feature that is equal to one for all states, and assuming S' = S, we have

  ||V* − Φθ||_{1,μ₀} ≤ (2 / (1 − γ)) min_θ ||V* − Φθ||_∞

  (slightly weaker, probabilistic guarantees hold for S' ≠ S; these guarantees require the size of S' to grow as the number of features grows)

kV ∗ Φθk∞