

SLIDE 1

Impulse Control Inputs and the Theory of Fast Feedback Control

  • A. N. Daryin and A. B. Kurzhanski

Moscow State (Lomonosov) University Faculty of Computational Mathematics and Cybernetics

IFAC World Congress, 2008

SLIDE 2

Introduction

Impulse controls are instantaneous control actions ("hits") that produce trajectories with discontinuities: jumps, resets, etc. Mechanical systems: using ordinary δ-functions gives a jump in velocity; using higher derivatives of δ-functions gives a reset of all coordinates.

SLIDE 3

Introduction

The emphasis in this paper is on:

  • higher-order distributions as control inputs (the δ-function and its derivatives);
  • fast controls;
  • feedback control.

SLIDE 4

The Impulse Control Problem

ẋ(t) = A(t)x(t) + B(t)u(t), t ∈ [t0, t1] — a fixed time interval.

Problem 1 (a Mayer–Bolza analogy). Minimize

J(U(·)) = Var[t0,t1] U(·) + ϕ(x(t1 + 0))

over U(·) ∈ BV[t0, t1], where x(t) is generated by the control input u(t) = dU/dt starting from x(t0 − 0) = x0.
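Since the dynamics are linear, the terminal state under a purely impulsive U (jumps h_i at times τ_i) follows from the variation-of-constants formula: x(t1) = X(t1, t0)x0 + Σ_i X(t1, τ_i)B h_i. A minimal runnable sketch, assuming a toy double integrator (ẋ1 = x2, ẋ2 = u, not an example from the slides), where X(t, s) = [[1, t − s], [0, 1]]:

```python
# Terminal state of the double integrator under velocity impulses:
# x(t1) = X(t1,t0) x0 + sum_i X(t1,tau_i) B h_i, with B = (0, 1)^T.
def terminal_state(x0, hits, t0=0.0, t1=1.0):
    """hits: list of (tau_i, h_i) impulse times and amplitudes."""
    x = [x0[0] + (t1 - t0) * x0[1], x0[1]]    # free motion X(t1,t0) x0
    for tau, h in hits:
        x[0] += (t1 - tau) * h                # X(t1,tau) B h: position drift
        x[1] += h                             # velocity jump
    return x

# Two opposite unit hits: gain one unit of position and come to rest.
print(terminal_state([0.0, 0.0], [(0.0, 1.0), (1.0, -1.0)]))
```

The total variation of this U is 2, which foreshadows the value-function computation later in the talk.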

SLIDE 5

The Impulse Control Problem

Known result (N. N. Krasovski [1957], L. W. Neustadt [1964]):

u(t) = Σ_{i=1}^{n} h_i δ(t − τ_i).

Important particular case: ϕ(x) = I(x | {x1}) — steer from x0 to x1 on [t0, t1], where

I(x | A) = 0 if x ∈ A; +∞ if x ∉ A.

SLIDE 6

The Value Function

Definition. The minimum of J(U(·)) over controls with fixed initial position x(t0 − 0) = x0 is called the value function: V(t0, x0) = V(t0, x0; t1, ϕ(·)).

How to find the value function?
  • Integrate the HJB equation.
  • Use an explicit representation (convex analysis).

SLIDE 7

The Dynamic Programming Equation

The value function V(t, x; t1, ϕ(·)) satisfies the Principle of Optimality:

V(t0, x0; t1, ϕ(·)) = V(t0, x0; τ, V(τ, ·; t1, ϕ(·))), τ ∈ [t0, t1].

It is the solution to the Hamilton–Jacobi–Bellman variational inequality

min {H1(t, x, Vt, Vx), H2(t, x, Vt, Vx)} = 0, V(t1, x) = V(t1, x; t1, ϕ(·)),

where

H1 = Vt + ⟨Vx, A(t)x⟩, H2 = min_{u∈S1} ⟨Vx, B(t)u⟩ + 1 = −‖Bᵀ(t)Vx‖ + 1.
SLIDE 8

The Control Structure

At position (t, x):
  • where H1(t, x) = 0 — wait: dU(t) = 0;
  • where H2(t, x) = 0 — jump: U(τ) = α · d · χ(τ − t); choose jump direction d = −BᵀVx and jump amplitude min{α ≥ 0 : H1(t, x + αd) = 0}.
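The wait/jump rule can be sketched as a decision procedure. This is an illustrative reading, not the authors' code: the callable interface (grad_V, H1) is hypothetical, and the state jump is taken as x + αB(t)d, interpreting the slide's x + αd through the input matrix.

```python
# Sketch of the jump/wait feedback rule from min{H1, H2} = 0.
def feedback_step(t, x, grad_V, B, H1, alpha_grid, tol=1e-9):
    """Return ('wait', x) or ('jump', new_x) at position (t, x).

    grad_V(t, x) -> V_x (list); B(t) -> matrix as nested lists;
    H1(t, x) -> float; alpha_grid -- candidate jump amplitudes to scan.
    """
    Vx = grad_V(t, x)
    Bt = B(t)
    n, m = len(Bt), len(Bt[0])
    # H2 = -||B^T(t) V_x|| + 1: a jump only pays off once ||B^T V_x|| >= 1.
    BTVx = [sum(Bt[i][j] * Vx[i] for i in range(n)) for j in range(m)]
    if sum(v * v for v in BTVx) ** 0.5 < 1.0 - tol:
        return ('wait', x)
    d = [-v for v in BTVx]                          # jump direction d = -B^T V_x
    Bd = [sum(Bt[i][j] * d[j] for j in range(m)) for i in range(n)]
    for alpha in alpha_grid:                        # smallest alpha with H1 = 0
        y = [x[i] + alpha * Bd[i] for i in range(n)]
        if abs(H1(t, y)) < tol:
            return ('jump', y)
    return ('wait', x)                              # no admissible amplitude found

# Toy check: scalar system x' = u, steer to 0, V(t, x) = |x|, so V_x = sign(x)
# and ||B^T V_x|| = 1: the rule jumps straight to the origin.
act, x_new = feedback_step(
    0.0, [2.0],
    grad_V=lambda t, x: [1.0 if x[0] >= 0 else -1.0],
    B=lambda t: [[1.0]],
    H1=lambda t, y: abs(y[0]),                      # "waiting is optimal" only at 0
    alpha_grid=[k * 0.5 for k in range(0, 11)],
)
print(act, x_new)
```

The grid scan over α stands in for the exact root-finding that an implementation of the amplitude rule would use.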

SLIDE 9

The Explicit Formula

V(t0, x0) = inf_{x1∈Rⁿ} { ϕ(x1) + sup_{p∈Rⁿ} ⟨p, x1 − X(t1, t0)x0⟩ / ‖p‖[t0,t1] }.

The value function is convex and its conjugate equals

V∗(t0, p) = ϕ∗(Xᵀ(t0, t1)p) + I(Xᵀ(t0, t1)p | B[t0,t1]),

where B[t0,t1] is the unit ball of the norm ‖p‖[t0,t1] = ‖Bᵀ(·)Xᵀ(t1, ·)p‖_{C[t0,t1]}, and

∂X(t, τ)/∂t = A(t)X(t, τ), X(τ, τ) = I.

See (Daryin, Kurzhanski, and Seleznev, 2005).
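In the steering case ϕ = I(x | {x1}), the infimum drops out and V(t0, x0) = sup_p ⟨p, x1 − X(t1, t0)x0⟩ / ‖p‖[t0,t1]. A numeric sketch under an assumed toy system (the double integrator ẋ1 = x2, ẋ2 = u on [0, 1], not taken from the slides): there X(t1, t) = [[1, t1 − t], [0, 1]], so Bᵀ(t)Xᵀ(t1, t)p = (t1 − t)p1 + p2 is affine in t and its maximum modulus sits at an endpoint. The ratio is 0-homogeneous in p, so a scan over directions suffices.

```python
# Evaluate V(t0,x0) = sup_p <p, c> / ||p||_[t0,t1] for the double integrator,
# with c = x1 - X(t1,t0) x0 and ||p|| = max(|(t1-t0) p1 + p2|, |p2|).
import math

def value_steering(c, t0=0.0, t1=1.0, n_angles=100000):
    best = 0.0
    for i in range(n_angles):
        th = math.pi * i / n_angles           # half circle: -p gives the same |ratio|
        p1, p2 = math.cos(th), math.sin(th)
        norm = max(abs((t1 - t0) * p1 + p2), abs(p2))
        if norm > 1e-12:
            best = max(best, abs(p1 * c[0] + p2 * c[1]) / norm)
    return best

# Move one unit of position and stop: two unit velocity hits, total variation 2.
V_pos = value_steering((1.0, 0.0))
# Reach velocity 1 at zero position: a single unit hit at t = t1, variation 1.
V_vel = value_steering((0.0, 1.0))
print(round(V_pos, 3), round(V_vel, 3))
```

Both values match the impulse constructions by hand (hits of ±1 at the interval endpoints), which is a useful sanity check on the formula's norm.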

SLIDE 10

The Generalized Impulse Control Problem

Problem 2. Minimize

J(u) = ρ∗[u] + ϕ(x(t1 + 0))

over distributions u ∈ Dk[α, β], (α, β) ⊇ [t0, t1], where x(t) is the trajectory generated by control u starting from x(t0 − 0) = x0.

Here ρ∗[u] is the norm conjugate to the norm ρ on Ck[α, β]:

ρ[ψ] = max_{t∈[α,β]} ( ‖ψ(t)‖² + ‖ψ′(t)‖² + · · · + ‖ψ(k)(t)‖² )^{1/2}.

The optimal controls take the form

u(t) = Σ_{i=1}^{n} [ h_i(0) δ(t − τ_i) + h_i(1) δ′(t − τ_i) + · · · + h_i(k) δ(k)(t − τ_i) ].

SLIDE 11

Reduction to the “Ordinary” Impulse Control Problem

How to deal with higher-order derivatives δ(j)(t)? Reduce to a problem with ordinary δ-functions, but for a more complicated system. General form of distributions u ∈ Dk:

u = dU0/dt + d²U1/dt² + · · · + d^{k+1}Uk/dt^{k+1}, Uj ∈ BV.

Problem 2 reduces to a particular case of Problem 1 for the system

ẋ = A(t)x + B(t)u, B(t) = [ L0(t) L1(t) · · · Lk(t) ],

and the control u = dU/dt, U(t) = (U0(t), . . . , Uk(t))ᵀ, with

L0(t) = B(t), Lj(t) = A(t)Lj−1(t) − L′j−1(t).
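The recursion for the stacked columns can be sketched in a few lines. A minimal sketch assuming constant A and B (so the derivative term L′_{j−1} vanishes and Lj = AʲB); the system used for the check is the double integrator, an assumed example:

```python
# L0 = B, L_j = A L_{j-1} - L'_{j-1}; for constant matrices L_j = A^j B.
def mat_mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def reduction_columns(A, B, k):
    """Return [L0, L1, ..., Lk] for constant A, B."""
    cols = [B]
    for _ in range(k):
        cols.append(mat_mul(A, cols[-1]))   # L_j = A L_{j-1} (L'_{j-1} = 0)
    return cols

# Double integrator: A = [[0,1],[0,0]], B = (0,1)^T.
A = [[0.0, 1.0], [0.0, 0.0]]
B = [[0.0], [1.0]]
L = reduction_columns(A, B, 1)
# Stacked matrix [L0 L1] = [[0,1],[1,0]] has rank 2: with delta and delta',
# both position and velocity can be reset instantaneously.
print(L)
```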

SLIDE 12

Reduction to the “Ordinary” Impulse Control Problem

B(t) = [ L0(t) L1(t) · · · Lk(t) ] — what does it look like?

For example, A = 0:

B(t) = [ B(t) −B′(t) B′′(t) · · · (−1)ᵏB(k)(t) ].

A, B = const:

B(t) = [ B AB A²B · · · AᵏB ].

SLIDE 13

Fast Controls

With rank B = n, the system can be steered from x0 to x1 in zero time by an ideal control

u(t) = h(0)δ(t − t0) + h(1)δ′(t − t0) + · · · + h(k)δ(k)(t − t0),

i.e.

x1 − x0 = L0(t0)h(0) + L1(t0)h(1) + · · · + Lk(t0)h(k).

Approximations of such ideal zero-time controls are Fast Controls: they steer the system in arbitrarily small "fast" ("nano") time.
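For the assumed double-integrator example (constant A = [[0,1],[0,0]], B = (0,1)ᵀ), the stacked matrix [L0 L1] = [[0,1],[1,0]] is invertible, so the linear system x1 − x0 = L0 h(0) + L1 h(1) solves by inspection: h(0) closes the velocity gap (via δ) and h(1) closes the position gap (via δ′). A minimal sketch:

```python
# Ideal zero-time hits for the double integrator: solve
# x1 - x0 = L0 h0 + L1 h1 with L0 = (0,1)^T, L1 = (1,0)^T.
def ideal_hits(x0, x1):
    """Amplitudes (h0, h1) of h0*delta + h1*delta' steering x0 -> x1."""
    return (x1[1] - x0[1], x1[0] - x0[0])   # (velocity gap, position gap)

h0, h1 = ideal_hits((0.0, 0.0), (1.0, -2.0))
print(h0, h1)
```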

SLIDE 14

Fast Controls

[Plots: rectangle-type approximations of δ(t), δ(1)(t), δ(2)(t) and δ(3)(t).]

SLIDE 15

Fast Controls

The problem with Fast Controls reduces to an impulse control problem

ẋ = A(t)x + Mσ(t)u, Mσ(t) = [ Mσ(0)(t) Mσ(1)(t) · · · Mσ(k)(t) ],

with

Mσ(j)(t) = ∫ from t to t+kσ of X(t + kσ, τ)B(τ)Δσ(j)(τ − t) dτ,

Δσ(0)(t) = (1/σ)·1[0,σ](t), Δσ(j)(t) = (1/σ)(Δσ(j−1)(t) − Δσ(j−1)(t − σ)).

We have Mσ(t) → B(t) as σ → 0.
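The convergence Mσ(t) → B(t) can be illustrated numerically. A hedged sketch, assuming the double integrator (A = [[0,1],[0,0]], B = (0,1)ᵀ, t = 0, k = 1) and a slight variant of the upper integration limit (T = 2σ, so that the full support of Δσ(1) is covered):

```python
# Midpoint-quadrature evaluation of the columns M^(0), M^(1): as sigma -> 0
# they approach L0 = (0,1)^T and L1 = (1,0)^T for the double integrator.
def M_columns(sigma, steps=4000):
    T = 2.0 * sigma
    d0 = lambda s: 1.0 / sigma if 0.0 <= s < sigma else 0.0
    d1 = lambda s: (d0(s) - d0(s - sigma)) / sigma
    M0, M1 = [0.0, 0.0], [0.0, 0.0]
    h = T / steps
    for i in range(steps):
        tau = (i + 0.5) * h                   # midpoint rule
        XB = (T - tau, 1.0)                   # X(T,tau) B for the double integrator
        for M, d in ((M0, d0), (M1, d1)):
            w = d(tau) * h
            M[0] += XB[0] * w
            M[1] += XB[1] * w
    return M0, M1

M0, M1 = M_columns(sigma=1e-3)
print([round(v, 3) for v in M0], [round(v, 3) for v in M1])
```

For small σ the residual in M0 is O(σ) (the 1.5σ position term), while M1 already matches L1 up to quadrature error.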

SLIDE 16

Examples — Oscillating Systems

[Diagrams: a chain of masses m1, . . . , mN−1, mN coupled by springs k1, k2, . . . , kN, with displacements w1, . . . , wN and applied force F; an analogous LC ladder circuit with elements L1, C1, L2, C2, . . . , LN, CN and output voltage VCN.]

SLIDE 17

Examples — Oscillating Systems

m1ẅ1 = k2(w2 − w1) − k1w1
miẅi = ki+1(wi+1 − wi) − ki(wi − wi−1)
mνẅν = kν+1(wν+1 − wν) − kν(wν − wν−1) + u(t)
mNẅN = −kN(wN − wN−1)

wi = wi(t) — displacements from the equilibrium; mi — masses of the loads; ki — stiffness coefficients; u(t) = dU/dt — impulse control (U ∈ BV).

This system is completely controllable. For N = 20 springs, the dimension of the system is 2N = 40. Feedback control (all wi and ẇi are measured).
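The chain model can be assembled mechanically and its complete controllability checked via the Kalman rank condition. A minimal sketch with assumed unit masses and stiffnesses, control at mass ν, verified here for N = 2 (state dimension 2N = 4); the slides assert the property for the general chain:

```python
# Spring-chain matrices and a Kalman controllability-rank check (pure Python).
def chain_matrices(N, nu):
    """State x = (w_1..w_N, w'_1..w'_N); control enters at mass nu (1-based)."""
    n = 2 * N
    A = [[0.0] * n for _ in range(n)]
    for i in range(N):
        A[i][N + i] = 1.0                      # dw_i/dt = velocity component
    for i in range(N):
        if i == 0:
            A[N + i][i] -= 1.0                 # wall spring k_1
        if i > 0:                              # coupling to the left neighbour
            A[N + i][i - 1] += 1.0
            A[N + i][i] -= 1.0
        if i < N - 1:                          # coupling to the right neighbour
            A[N + i][i + 1] += 1.0
            A[N + i][i] -= 1.0
    B = [[0.0] for _ in range(n)]
    B[N + nu - 1][0] = 1.0                     # u acts on the acceleration of mass nu
    return A, B

def rank(M, eps=1e-9):
    """Rank of a nested-list matrix by Gaussian elimination."""
    M = [row[:] for row in M]
    r = 0
    for c in range(len(M[0])):
        piv = next((i for i in range(r, len(M)) if abs(M[i][c]) > eps), None)
        if piv is None:
            continue
        M[r], M[piv] = M[piv], M[r]
        for i in range(len(M)):
            if i != r and abs(M[i][c]) > eps:
                f = M[i][c] / M[r][c]
                for j in range(len(M[0])):
                    M[i][j] -= f * M[r][j]
        r += 1
    return r

def kalman_rank(A, b):
    """rank [b, Ab, ..., A^{n-1} b]; full rank means complete controllability."""
    n = len(A)
    cols, v = [], b[:]
    for _ in range(n):
        cols.append(v)
        v = [sum(A[i][j] * v[j] for j in range(n)) for i in range(n)]
    return rank([[cols[j][i] for j in range(n)] for i in range(n)])

A, B = chain_matrices(N=2, nu=1)
b = [row[0] for row in B]
print(kalman_rank(A, b))                       # 4 = 2N: completely controllable
```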

SLIDE 18

Feedback Control Structure for N = 1

[Phase portraits in the (x1, x2) plane for N = 1: "wait" regions and the "jump up" / "jump down" regions of the feedback control.]

SLIDE 19

Chain, N = 3, Control with Second Derivatives

SLIDE 20

Chain, N = 5, Control with Second Derivatives

SLIDE 21

String, N = 20, Ordinary Impulse Control

SLIDE 22

String, N = 20, Control with Second Derivatives

SLIDE 23

Application: Formalization of Hybrid Systems

ẋ = A(t, z)x + B(t, z)u + I·u0, ż = ud,
x ∈ Rⁿ, z ∈ {0, 1, . . . , N};
u = u(t, x) — the online control;
u0 = u0(t, x, z) — resetting the state space vector:

u0(t, x, z) = Σ_{j=0}^{n−1} αj(t, x, z(t − 0)) δ(j)(f0(x, z));

ud = ud(x, z) = β(x, z(t − 0)) δ(fd(x, z)) — resetting the subsystem from k′ to k′′ (β(x, z(t − 0)) = k′′ − k′);
f0(x, z) = 0, fd(x, z) = 0 — switching surfaces.

SLIDE 24

State Space of a Hybrid System

SLIDE 25

References

Bensoussan, A. and J.-L. Lions. Contrôle impulsionnel et inéquations quasi-variationnelles. Dunod, Paris, 1982.

Daryin, A. N. and A. B. Kurzhanski. Generalized functions of high order as feedback controls. Differenc. Uravn., 43(11), 2007.

Daryin, A. N., A. B. Kurzhanski, and A. V. Seleznev. A dynamic programming approach to the impulse control synthesis problem. In Proc. Joint 44th IEEE CDC-ECC 2005, Seville, 2005. IEEE.

Dykhta, V. A. and O. N. Samsonuk. Optimal impulsive control with applications. Fizmatlit, Moscow, 2003.

Gelfand, I. M. and G. E. Shilov. Generalized Functions. Academic Press, N.Y., 1964.

Krasovski, N. N. On a problem of optimal regulation. Prikl. Math. & Mech., 21(5):670–677, 1957.

Krasovski, N. N. The Theory of Control of Motion. Nauka, Moscow, 1968.

Kurzhanski, A. B. On synthesis of systems with impulse controls. Mechatronics, Automatization, Control, (4):2–12, 2006.

Kurzhanski, A. B. and Yu. S. Osipov. On controlling linear systems through generalized controls. Differenc. Uravn., 5(8):1360–1370, 1969.

SLIDE 26

References

Kurzhanski, A. B. and I. Vályi. Ellipsoidal Calculus for Estimation and Control. SCFA. Birkhäuser, Boston, 1997.

Kurzhanski, A. B. and P. Varaiya. Ellipsoidal techniques for reachability analysis. Internal approximation. Systems and Control Letters, 41:201–211, 2000.

Lancaster, P. Theory of Matrices. Academic Press, N.Y., 1969.

Miller, B. M. and E. Ya. Rubinovich. Impulsive Control in Continuous and Discrete-Continuous Systems. Kluwer, N.Y., 2003.

Neustadt, L. W. Optimization, a moment problem and nonlinear programming. SIAM Journal on Control, 2(1):33–53, 1964.

Riesz, F. and B. Sz.-Nagy. Leçons d'analyse fonctionnelle. Akadémiai Kiadó, Budapest, 1972.

Schwartz, L. Théorie des distributions. Hermann, Paris, 1950.

Seidman, Th. I. and J. Yong. How violent are fast controls? II. Math. of Control, Signals, Syst., (9):327–340, 1997.