SLIDE 1

derivatives for design and control

with Jim and Simon

SLIDE 2

๐’š ๐œ„2 ๐œ„1 ๐œ„3

motor angles ๐œพ = ๐œ„1 โ‹ฎ ๐œ„๐ฟ end effector position ๐‘ฆ โˆˆ โ„2

review: serial manipulator

SLIDE 3

forward kinematics (FK)

what is x(θ)?

i.e., given joint angles θ, what is the corresponding tip position x?

→ something like x(θ) = T_K R_K ⋯ T_1 R_1 O   // some big analytic expression with a bunch of sin(θ_i)'s and cos(θ_j)'s

SLIDE 4

inverse kinematics (IK)

what is θ*(x̂)?

i.e., given a target tip position x̂, what is an optimal choice of joint angles θ*?

SLIDE 5
  • Option 0: solve analytically
SLIDE 6
  • Option 1: use optimization

minimize a suitable objective

SLIDE 7
  • Option 1a: derivative-free optimization

requires no derivatives

  • when in doubt just use CMA-ES
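A minimal derivative-free IK sketch. The slide's actual recommendation, CMA-ES, is available in the `cma` package; as a dependency-free stand-in, this uses a bare-bones (1+1) evolution strategy, which only ever evaluates the objective. The 3-link arm, unit link lengths, and target are assumptions for illustration:

```python
import numpy as np

def fk(theta, lengths=(1.0, 1.0, 1.0)):
    # forward kinematics of a planar arm (see the FK slide)
    pos, phi = np.zeros(2), 0.0
    for t, L in zip(theta, lengths):
        phi += t
        pos = pos + L * np.array([np.cos(phi), np.sin(phi)])
    return pos

def ik_es(x_hat, iters=2000, sigma=0.3, seed=0):
    """Derivative-free IK with a (1+1) evolution strategy.

    CMA-ES additionally adapts a full covariance matrix; here only a
    scalar step size is adapted (1/5th-rule style).  No derivatives of
    f(θ) = ½|x(θ) − x̂|² are used anywhere.
    """
    rng = np.random.default_rng(seed)
    theta = np.zeros(3)
    best = 0.5 * np.sum((fk(theta) - x_hat) ** 2)
    for _ in range(iters):
        cand = theta + sigma * rng.standard_normal(3)
        f = 0.5 * np.sum((fk(cand) - x_hat) ** 2)
        if f < best:                # keep the better of parent/offspring
            theta, best = cand, f
            sigma *= 1.1            # widen the search after a success
        else:
            sigma *= 0.95           # shrink it after a failure
    return theta

theta_star = ik_es(np.array([1.0, 1.5]))
print(fk(theta_star))   # close to the target (1.0, 1.5)
```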
SLIDE 8
  • Option 1b: derivative-based optimization

may require 1 derivative (gradient)… gradient descent
may require 2 derivatives (gradient and Hessian)… Newton's method

  • or be somewhere in the middle…

Gauss-Newton, L-BFGS
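A gradient-descent sketch for the same toy planar arm (unit link lengths assumed). The "1 derivative" is the analytic Jacobian dx/dθ; the comments note where Newton or Gauss-Newton would differ:

```python
import numpy as np

LENGTHS = np.array([1.0, 1.0, 1.0])   # assumed unit link lengths

def fk(theta):
    phis = np.cumsum(theta)
    return np.array([np.sum(LENGTHS * np.cos(phis)),
                     np.sum(LENGTHS * np.sin(phis))])

def jacobian(theta):
    """Analytic dx/dθ (2×3): joint i swings every link j ≥ i."""
    phis = np.cumsum(theta)
    J = np.zeros((2, 3))
    for i in range(3):
        J[0, i] = -np.sum(LENGTHS[i:] * np.sin(phis[i:]))
        J[1, i] =  np.sum(LENGTHS[i:] * np.cos(phis[i:]))
    return J

def ik_gd(x_hat, lr=0.05, iters=3000):
    """Gradient descent on f(θ) = ½|x(θ) − x̂|².

    df/dθ = (x(θ) − x̂)ᵀ · dx/dθ  (chain rule).  Newton would also need
    d²f/dθ²; Gauss-Newton approximates that Hessian by JᵀJ.
    """
    theta = np.array([0.1, 0.1, 0.1])     # arbitrary starting guess
    for _ in range(iters):
        r = fk(theta) - x_hat             # ∂f/∂x, as a row vector
        theta = theta - lr * (r @ jacobian(theta))
    return theta

theta_star = ik_gd(np.array([1.0, 1.5]))
print(fk(theta_star))   # tip ends up near the target x̂
```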

SLIDE 9
  • Option 2: learn it

build a large set of training data {(θ_i, x_i)}, i = 1, …, N, using forward kinematics, then train a deep net (mapping x ↦ θ) using TensorFlow, and evaluate the deep net at the target x̂
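The slide proposes TensorFlow; the same idea fits in a dependency-free miniature, a one-hidden-layer net trained by hand-written backprop on FK-generated data (arm, layer sizes, and learning rate are all assumptions for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

def fk(Theta):
    """Batched forward kinematics of a 3-link planar arm (unit links)."""
    phis = np.cumsum(Theta, axis=-1)
    return np.stack([np.cos(phis).sum(-1), np.sin(phis).sum(-1)], axis=-1)

# 1) build a training set {(θ_i, x_i)}, i = 1..N, using forward kinematics
N = 2000
Theta = rng.uniform(-0.5 * np.pi, 0.5 * np.pi, size=(N, 3))
X = fk(Theta)

# 2) fit a tiny net θ ≈ net(x) by full-batch gradient descent on MSE
H, lr = 64, 0.05
W1 = rng.normal(0.0, 0.5, (2, H)); b1 = np.zeros(H)
W2 = rng.normal(0.0, 0.5, (H, 3)); b2 = np.zeros(3)
losses = []
for step in range(2000):
    h = np.tanh(X @ W1 + b1)
    err = (h @ W2 + b2) - Theta
    losses.append(float(np.mean(err ** 2)))
    # backprop by hand
    gW2 = h.T @ err / N; gb2 = err.mean(0)
    dh = (err @ W2.T) * (1.0 - h ** 2)
    gW1 = X.T @ dh / N;  gb1 = dh.mean(0)
    W1 -= lr * gW1; b1 -= lr * gb1; W2 -= lr * gW2; b2 -= lr * gb2

# 3) "IK" is now a single forward pass at the target pose x̂
x_hat = np.array([1.0, 1.5])
theta_star = np.tanh(x_hat @ W1 + b1) @ W2 + b2
print(losses[0], losses[-1])   # training loss drops
```

One caveat worth knowing: IK is multivalued (many θ reach the same x), so plain regression averages over solution branches, which limits the accuracy this option can reach.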

SLIDE 10
  • Option 3: invert kinematics using the real world
SLIDE 11
  • Option 1b: derivative-based optimization

say the objective is f(x(p)) = ½ (x(p) − x̂)ᵀ (x(p) − x̂)

the gradient is df/dp = ∂f/∂x · dx/dp   // chain rule

∂f/∂x = (x(p) − x̂)ᵀ is trivial to compute, and for a serial manipulator, dx/dp can be computed analytically

SLIDE 12

But what if x(p) does not have an analytic expression?

For example, static equilibrium of a finite element mesh: x(p) = argmin_x E(x, p)

Still want to solve optimization problems of this form: min_p f(x(p))

SLIDE 13

An example: topology optimization

SLIDE 14

Modeling the Continuous Relation between Parameters and State

  • Observation: when we set the parameters p, we observe the state x as the result of simulation.
  • Although x are problem variables, they are not real DOFs – they are functions of the parameters, i.e., x = x(p)
  • The map from parameters to state is x = simulate(p)
  • For design, we need derivatives of x(p): df/dp = ∂f/∂x · dx/dp
  • But how do we compute these derivatives, dx/dp = d(simulate)/dp ?
  • The derivative of an argmin...?

SLIDE 15

Differentiating the Map

  • Although we can evaluate the map p → x(p), this map is not available in closed form (i.e., analytically).
  • Evaluating p → x(p) requires minimizing a function, i.e., solving a system of nonlinear equations.
  • In general, it is impractical to differentiate the minimization process itself.
  • But even though p → x(p) is not given explicitly, the first-order optimality condition on the energy E,

g(x, p) = ∇ₓE = ∂E/∂x = 0

provides this map implicitly.

SLIDE 16

Differentiating the Map

  • Suppose that (x, p) is a feasible pair, i.e., g(x, p) = 0. In other words, x is an equilibrium configuration for p.
  • If we apply a parameter perturbation Δp, the system will undergo displacements Δx such that it is again in equilibrium: g(x + Δx, p + Δp) = 0
  • Since this has to hold for arbitrary parameter variations, we have

dg/dp = ∂g/∂x · dx/dp + ∂g/∂p = 0   // total derivative

  • If the Jacobian ∂g/∂x is non-singular, we have

dx/dp = −(∂g/∂x)⁻¹ ∂g/∂p

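That formula can be exercised on the smallest system with all the ingredients: two springs in series under a load, a stand-in for the FEM energy. Assume E(x, p) = ½p₁x₁² + ½p₂(x₂ − x₁)² − f_ext·x₂, where p are the stiffnesses; then g = ∂E/∂x = 0 is a linear system, and ∂g/∂x, ∂g/∂p follow by hand (all values below are illustrative assumptions):

```python
import numpy as np

f_ext = 1.0   # external load on the second mass (assumed)

def solve_equilibrium(p):
    """x(p) = argmin_x E(x, p): two springs in series, stiffnesses p."""
    k1, k2 = p
    K = np.array([[k1 + k2, -k2], [-k2, k2]])   # here K is also ∂g/∂x
    return np.linalg.solve(K, np.array([0.0, f_ext]))

def sensitivity(p):
    """S = dx/dp = −(∂g/∂x)⁻¹ ∂g/∂p, with g(x, p) = ∂E/∂x = 0."""
    k1, k2 = p
    x = solve_equilibrium(p)
    dg_dx = np.array([[k1 + k2, -k2], [-k2, k2]])
    dg_dp = np.array([[x[0], -(x[1] - x[0])],   # one column per stiffness
                      [0.0,   x[1] - x[0]]])
    return -np.linalg.solve(dg_dx, dg_dp)

p = np.array([2.0, 3.0])
S = sensitivity(p)

# sanity check: finite differences re-solve the equilibrium for perturbed p
eps = 1e-6
for i in range(2):
    dp = np.zeros(2); dp[i] = eps
    fd = (solve_equilibrium(p + dp) - solve_equilibrium(p - dp)) / (2 * eps)
    assert np.allclose(S[:, i], fd, atol=1e-6)
print(S)
```

The key point the check makes concrete: the sensitivity comes from one linear solve per parameter, not from differentiating the solver.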

SLIDE 17

Sensitivity Analysis

  • Used in many applications to quantify the sensitivity of a solution with respect to parameters (S = dx/dp is also called the sensitivity matrix)
  • Widely used for shape optimization, topology optimization, control, etc.

SLIDE 18
  • Option 1b: derivative-based optimization

say the objective is f(x(p)) = ½ (x(p) − x̂)ᵀ (x(p) − x̂)

the gradient is df/dp = ∂f/∂x · dx/dp   // chain rule

∂f/∂x = (x(p) − x̂)ᵀ is trivial to compute, and for statically stable FEM (and for many, many other systems), dx/dp can be computed using sensitivity analysis

SLIDE 19

application: soft IK

say the control inputs p are the contracted lengths of cables in a soft robot... given a target pose x̂, what is the optimal control p*?

f(x(p)) = ½ (x(p) − x̂)ᵀ Q (x(p) − x̂),   where Q is a weighting matrix

SLIDE 20

[figure: the optimal control signals p* drive the real-world robot toward the user-specified target pose x̂]

SLIDE 21