The Problem of Output Measurement Feedback Control Under Set-valued Uncertainty: from Theory to Computation (PowerPoint presentation)

SLIDE 1

The Problem of Output Measurement Feedback Control Under Set-valued Uncertainty: from Theory to Computation

A.B. KURZHANSKI (Moscow State Univ. and Univ. of California at Berkeley). Presentation at the 48th IEEE CDC and 28th Chinese Control Conference, Shanghai, China, December 17, 2009.

SLIDE 2

OUTLINE

  • 1. Motivations
  • 2. The Basic Problem. The Separation Property.
  • 3. The GSE Problem of Guaranteed (Set-Membership) State Estimation
  • 4. The GCS Problem of Guaranteed Control Synthesis
  • 5. Combination of GSE and GCS: the Solution Strategy
  • 6. Systems with Linear Structure: the system and its reconfiguration
  • 7. Linear Systems: the Solution Scheme, reduction to finite dimensions
  • 8. Calculation: the Ellipsoidal and Polyhedral Techniques
  • 9. Conclusion

SLIDE 3

MOTIVATIONS

SLIDE 4

(Figure: a reachability set between starting point and end point; flow 0 ≤ v ≤ vt; measurements.)

SLIDE 5

Team Control Synthesis. Complete measurements

(Figure: W [t0] = W (t0,t1,M ), target set M , tube W [·]; container, safety zone Br, trajectories x(i)(t), safety set.)

SLIDE 7

The System Equations and the Uncertainties

The uncertain system:
dx/dt = f1(t,x,u) + f2(t,x,v), x ∈ Rn, t ∈ [t0,ϑ]   (1)
with continuous right-hand sides satisfying conditions of uniqueness and extendibility of solutions.
Hard bounds on the control u and the unknown disturbance v(t):
u ∈ P(t), v(t) ∈ Q (t),   (2)
P(t), Q (t) — compact sets in Rp, Rq, Hausdorff-continuous.

SLIDE 8

Measurement equation:
y(t) = h(t,x) + ξ(t), y ∈ Rm,   (3)
measurements y(t), t ∈ T (continuous or discrete);
disturbance in the measurement ξ(t) — unknown but bounded:
ξ(t) ∈ R (t), t ∈ [t0,ϑ],   (4)
R (t) — similar to P(t); h(t,x) — continuous.

Initial condition: x(t0) ∈ X0,   (5)
X0 — compact. Starting position: {t0,X0}.

SLIDE 10

BASIC PROBLEM: STEER SYSTEM
dx/dt = f1(t,x,u) + f2(t,x,v), x ∈ Rn, t ∈ [t0,ϑ],   (1)
y(t) = h(t,x) + ξ(t), y ∈ Rm,   (3)
from starting position {t0,X 0} to terminal position {ϑ,M }, by a feedback control strategy U(t,·), on the basis of the available information:
  • system model: equations (1), (3),
  • starting position {t0,X 0},
  • available measurement y(t),
  • given constraints on the control u and the uncertain disturbance inputs v(t), ξ(t).

SLIDE 11

What should the NEW STATE of the SYSTEM be?

*** Classical case under complete information:
Position (state) {t,x} — single-valued. Closed-loop control: {u(t,x)}. Trajectories single-valued: x[t] = x(t,t0,x0).

*** Output feedback control under incomplete information, with set-valued bounds (no statistical data available):
Position (state) — set-valued: X [t].

SLIDE 12

The on-line set-valued position (NEW STATE) of the system may be taken as:
* {t, yt(·)} — memorize the measurements (in stochastic control this is done through observers and filters (Kalman));
** {t, X [t]} — find the set-valued information set consistent with the measurements and the constraints on the uncertain items: find set-valued information tubes;
*** {t, V(t,·)} — find the information state, a function V(t,x) such that X [t] = {x : V(t,x) ≤ α} is a level set of V(t,x) (found through Hamilton-Jacobi-Bellman (HJB) PDEs).

SLIDE 13

Guaranteed State Estimation under Set-membership Noise

(Figure: open-loop reachability tube from t = t0 to t = τ; measurements 1, 2, 3 cut it down to the information set X (τ).)

SLIDE 14

Problem I of Measurement Output Feedback Control:

Specify a feedback strategy (closed-loop controls) U(t,X [t]) or U(t,V(t,·)) which steers the overall system FROM any starting position {τ,X [τ]}, τ ∈ [t0,ϑ], TO a given neighborhood Mµ of the target set M at time ϑ:

{τ,X [τ]} → {ϑ,X [ϑ]}, X [ϑ] ⊆ Mµ,

despite unknown disturbances and incomplete measurements.

ATTENTION for MATHEMATICIANS: U = {U(t,X [t])} must ensure the existence and extendability of solutions to the differential inclusion ẋ ∈ f1(t,x,U(t,X [t])) + f2(t,x,v) within the interval t ∈ [t0,ϑ], whatever be v(t).

SLIDE 15

(Measurement) Output Feedback Control

Closed-loop (feedback) control strategies: U(t,X ), U(t,V(t,·)), with state {t,X } or {t,V(t,·)}, and set-valued trajectories: X [t] = X (t,t0,X 0);

or single-valued x[t] with set-valued error bound R [t], with state {t, x[t], Ω[t]} (external estimate E[t] ⊇ R [t]); trajectories x[t] = x(t,t0,x0), error bounds Ω[t] = Ω(t,t0,X 0 − x0).

SLIDE 16

REMARK: Problem I may be separated into: Problem GSE of guaranteed state estimation (finite-dimensional) and Problem GCS of guaranteed control synthesis (infinite-dimensional). OUR AIM: (a) find the possibility of solutions while avoiding infinite-dimensional schemes; (b) design feasible computational methods.

SLIDE 19

SOLUTION METHODS

(a) GENERAL METHOD: the HAMILTON-JACOBI-BELLMAN (HJB) EQUATIONS
(b) USING INVARIANT SETS and AIMING METHODS: SET-VALUED CALCULUS + NONLINEAR ANALYSIS; FOR LINEAR SYSTEMS: CONVEX ANALYSIS
(c) THE H-INFINITY APPROACH
(d) APPROXIMATE METHODS: THE COMPARISON PRINCIPLE, DISCRETIZATION METHODS
(e) COMPUTATIONAL METHODS FOR LINEAR SYSTEMS: ELLIPSOIDAL CALCULUS, POLYHEDRAL CALCULUS, or BOTH
(f) INTERTWINING THE ABOVE METHODS

SLIDE 20

Problem GSE of Guaranteed State Estimation: the One-Stage Problem. NOTE THAT THERE IS WORST-CASE NOISE and BEST-CASE NOISE.

SLIDE 21

y = x + ξ, ξ ∈ R , with noise bound R .

(Figure: estimates c∗1, c∗2 under the worst case, a routine case, and the best case.)

SLIDE 22

Examples: nonlinear maps

x(k+1) = f(x(k)), y(k+1) = Gx(k+1) + ξ.

Take x ∈ R2, f(x) = (a x1 x2², a x1² x2),
X (k) = {x ∈ R2 : |xi| ≤ 1; i = 1,2},
y(k+1) = x2(k+1) + ξ, |ξ| ≤ µ,
XY(k+1) = {x : x2 ∈ [y(k+1) − µ, y(k+1) + µ]},
X (k+1) = f(X (k)) ∩ XY(k+1).

(Figure: the sets X (k), f(X (k)), and the measurement strip XY.)
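The update X (k+1) = f(X (k)) ∩ XY(k+1) can be illustrated numerically on a sampled point cloud. A minimal sketch, assuming the quadratic map from the slide and illustrative values for a, µ and the observation y (all hypothetical):

```python
import numpy as np

def f(x, a=1.0):
    # hypothetical quadratic map from the slide: f(x) = (a*x1*x2^2, a*x1^2*x2)
    x1, x2 = x[..., 0], x[..., 1]
    return np.stack([a * x1 * x2**2, a * x1**2 * x2], axis=-1)

def propagate_information_set(pts, y_next, mu, a=1.0):
    """Push the sample of X(k) through f, then keep only the points
    consistent with the measurement y(k+1) = x2(k+1) + xi, |xi| <= mu."""
    image = f(pts, a)
    mask = np.abs(image[:, 1] - y_next) <= mu
    return image[mask]

# grid sample of the initial box X(0) = {x : |xi| <= 1}
g = np.linspace(-1.0, 1.0, 101)
X0 = np.stack(np.meshgrid(g, g), axis=-1).reshape(-1, 2)
X1 = propagate_information_set(X0, y_next=0.5, mu=0.1)
```

Every surviving point lies in the measurement strip, so the sample of X (k+1) is consistent with the observation by construction.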

SLIDE 23

Nonlinear Examples

x(k+1) = f(x(k)), y(k+1) = a²x1² + b²x2² + ξ, |ξ| ≤ ε,
X (k) = {x ∈ R2 : |xi| ≤ 1; i = 1,2},
XY(k+1) = {x : a²x1² + b²x2² ∈ [y(k+1) − ε, y(k+1) + ε]},
X (k+1) = f(X (k)) ∩ XY(k+1).

(Figure: the information set X (k+1) may be disconnected.)

SLIDE 24

(Figure: convex hull of the disconnected information set.)

SLIDE 25

Unknown but bounded noise

(i) Measurements at given times (continuous or discrete). Noise unknown, with given bounds.
There is a worst case, when W [t] is the largest possible, and a best case, when W [t] may even reduce to a point.

(ii) Measurements arrive at random instants of time, due to a Poisson distribution. Noise with given bounds and a given probability density.
With stochastic noise the worst and best cases arrive with probability zero. The statistical estimates of x are consistent.

SLIDE 26

The Dynamics of the Information Set

(Figure: t∗ and t∗∗ mark the instants of discrete observations.)

SLIDE 27

Problem GSE of Guaranteed (“Minmax”) State Estimation

Problem GSE may be formulated in two versions, E1 and E2.

Problem E1. Given are the equations
dx/dt = f1(t,x,u) + f2(t,x,v), y(t) = h(t,x) + ξ(t),
(i) the position {t0,X 0}, the control used u[s], s ∈ [t0,τ), the measurement y = y∗(t), t ∈ [t0,τ], and the constraints u ∈ P, v ∈ Q , ξ ∈ R ,
(ii) with P, Q , R given.
Specify the information set X [τ] of solutions x(τ) to system (i), consistent with the system equations, the measurement y∗(t), t ∈ [t0,τ], and the constraints (ii). The information set X [τ] is the guaranteed estimate of x(τ).

SLIDE 28

(Figure: trajectories from X 0 with x(τ) ∈ X [τ]: those with min_v d(x(t0),X 0) = 0 have V(τ,x) = 0, those with min_v d(x(t0),X 0) > 0 have V(τ,x) > 0; V(τ,x) = min_v d(x(t0),X 0).)

SLIDE 29

It is necessary not only to calculate the set X [τ], but to arrange on-line calculations, following the evolution of X [t] in time!

This leads to a problem of DYNAMIC OPTIMIZATION:

Problem E2. Given the starting position {t0,X 0} and the realization y∗(s), s ∈ [t0,τ], find the value function

V(τ,x) = min_v { d(x(t0),X 0) | v(t) ∈ Q (t), t ∈ [t0,τ] }

due to equation (1), under the additional conditions x(τ) = x; y∗(s) − h(s,x(s)) ∈ R (s), s ∈ [t0,τ]. The last condition is actually an on-line state constraint.

SLIDE 30

The following relation is true:

X [t] = {x : V(t,x) ≤ 0} (!!!)

The value function V(t,x) may be found by solving an HJB equation.

Introduce the notation V(τ,x) = V(τ,x | V(t0,·)). Then the principle of optimality for Problem GSE reads:
V(τ,x | V(t0,·)) = V(τ,x | V(t,· | V(t0,·))), t0 ≤ t ≤ τ. (!)
This allows one to derive an HJB (Dynamic Programming) equation, to calculate V(t,x).

SLIDE 31

The HJB equation:

∂V/∂t + max_v { (∂V/∂x, f1(t,x,u∗(t)) + f2(t,x,v)) − d²(y∗(t) − h(t,x), R (t)) : v(t) ∈ Q (t) } = 0,

under the boundary condition V(t0,x) = d²(x,X 0).

Discretized scheme: X [t+σ] ∼ X [t+σ−0] ∩ Y(t+σ).

SLIDE 32

The Dynamics of the Information Set

(Figure: t∗ and t∗∗ mark the instants of discrete observations.)

SLIDE 33

Problem GCS of Guaranteed Synthesizing Control

The "motion" of the evolving system is given by either the tube X [τ] or the function V(τ,·).

Problem GCS. Find the value function

V (τ,V(τ,·)) = min_u max_y { d²(x[ϑ],M ) | u ∈ U, y(·) ∈ Y(·,u) }

over closed-loop controls and all predictable "future" tubes Y(·,u) = Y(ϑ,τ; X [τ], u).

SLIDE 34

The value function V (τ,x) = V (τ,V(τ,·)) satisfies the (infinite-dimensional) Principle of Optimality in a metric space of functions V(·):

V (τ,V(τ,·)) = V (τ,V(τ,·) | ϑ,V (ϑ,·)).

Finding V (t,V(t,·)) produces the solution strategy u = u0(t,V(t,·)) ∈ U.

But to find V (τ,V(τ,·)) one would have to solve a PDE in a space of functions rather than in finite dimensions.

SLIDE 35

The solution strategy u = u0(t,V(t,·)) ∈ U guarantees the condition

V (τ,V(τ,·)) ≤ max_y { max_x { d²(x,M ) | V(ϑ,x | V(τ,·)) ≤ 0 } | u ∈ U, y(·) ∈ Y(τ,u) }

for any strategy u = u(t,V(t,·)) ∈ U. Note that the V(t,·) are the "motions" of the formal evolution of

X [t] = {x : V(t,x) ≤ 0} – the on-line STATE of the system.

SLIDE 36

A straightforward application of the DP approach may demand a heavy computational burden; however, there are promising approaches, such as level-set methods, the comparison principle, discretization techniques and others. BUT DO NOT HURRY TO DISCARD DP (!!!): IN THE CASE OF LINEAR SYSTEMS, AND QUITE A NUMBER OF NONLINEAR ONES, THE EXACT SOLUTION MAY BE REACHED WITHOUT INFINITE-DIMENSIONAL PDEs, THROUGH FINITE-DIMENSIONAL SCHEMES ONLY (!!!) The computations are, of course, all designed within finite-dimensional schemes.

SLIDE 39
II. Linear Systems under Hard Bounds

The uncertain linear system:
dx/dt = A(t)x + B(t)u + C(t)v(t),   (L1)
with continuous matrix coefficients A(t), B(t), C(t);
hard bounds on the control u and the disturbance v(t):
u ∈ P(t), v(t) ∈ Q (t), t ∈ [t0,ϑ],
P(t), Q (t) — convex compact sets in Rp, Rq, Hausdorff-continuous.

SLIDE 40

Measurement equation:
y(t) = H(t)x + ξ(t), rank H = m,   (L2)
disturbance ξ(t) — unknown but bounded:
ξ(t) ∈ R (t), t ∈ [t0,ϑ],
R (t) — convex, compact, Hausdorff-continuous; H(t) — continuous.

Initial condition: x(t0) ∈ X 0, X 0 — convex compact.
Starting position: {t0,X 0}.

SLIDE 41

Problem GCS of Output Feedback Control:

Specify a feedback control strategy U(t,X [t]) or U(t,V(t,·)) which steers the overall system FROM any starting position {τ,X [τ]}, τ ∈ [t0,ϑ], TO a given neighborhood Mµ of the target set M at time ϑ:
{τ,X [τ]} → {ϑ,X [ϑ]}, X [ϑ] ⊆ Mµ,
despite unknown disturbances and incomplete measurements.
NOTE: U = {U(t,X [t])} must ensure the existence and extendability of solutions to the differential inclusion
ẋ ∈ A(t)x + B(t)U(t,X [t]) + C(t)v(t)
within the interval t ∈ [t0,ϑ], whatever be v(t).

SLIDE 42

New coordinates to simplify calculations: take the transformation x → G(t,ϑ)x, where G(t,ϑ) is the fundamental transition matrix of the original homogeneous system (1); make the necessary changes, then return to the original notations. Then
ẋ = B(t)u + C(t)v(t), y(t) = H(t)x + ξ(t), x(t0) ∈ X 0,
under hard bounds of type (2), (4), (5).

SLIDE 43

Rearrange the last system as follows:

dx∗/dt = B(t)u, x∗(t0) = 0,   (a)
dω/dt = C(t)v(t), v(t) ∈ Q (t), ω(t0) ∈ X 0,   (b)
z(t) = H(t)ω + ξ(t), ξ(t) ∈ R (t),   (c)

where x∗ + ω = x and

z(t) = y(t) − H(t) ∫_{t0}^{t} B(s)u(s) ds.

With u = u∗(s), s ∈ [t0,t) given, there is a one-to-one mapping between y(s) and z(s). Define the information set for system (a)-(c): Ω(t,·) = Ω[t]. Then

X [t] = x∗(t) + Ω[t].
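The decomposition above splits the state into a control-driven part x∗ and an uncertain part ω, recentring the measurement to z(t) = y(t) − H ∫ B u ds. A minimal discrete-time sketch with an Euler step; the constant matrices B, H, the step dt and all numeric values are illustrative assumptions, not data from the talk:

```python
import numpy as np

# Illustrative constants (assumptions): after the coordinate change,
# the dynamics are dx/dt = B u + C v with a 2-d state.
B = np.array([[1.0], [0.0]])
H = np.array([[1.0, 0.0]])
dt = 0.01

def recenter_measurements(u_seq, y_seq):
    """Compute x*(t) (control-driven part, x*(t0) = 0) and the recentred
    measurement z(t) = y(t) - H x*(t), which depends only on w = x - x*."""
    x_star = np.zeros(2)
    z_seq = []
    for u, y in zip(u_seq, y_seq):
        x_star = x_star + dt * (B @ np.atleast_1d(u))  # Euler step of dx*/dt = B u
        z_seq.append(float(y - H @ x_star))
    return x_star, np.array(z_seq)

x_star, z = recenter_measurements([1.0] * 100, [0.0] * 100)
```

The information set Ω[t] would then be estimated from z alone, and the full estimate recovered as X [t] = x∗(t) + Ω[t].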

SLIDE 46

SOLUTION METHOD: combining HJB techniques with the calculation of weakly invariant sets

The convex information sets X and the information states V(t,x) (convex functions) may be calculated through HJB equations, or convex analysis, or their approximations. Here the control strategies are calculated by minimizing the Hausdorff semidistance h+, following N.N. Krasovski's "aiming techniques." OTHER APPROACHES work through discretization of the continuous solutions, or by discretizing the problem from the beginning.

Comparative studies are necessary. Can we plug the controls found through discretization into the continuous equations, and what will be the error?

SLIDE 47

Complete measurements

(Figure: invariant set W [τ] at time τ, target set M at time ϑ, state space X ; the guaranteed strategy ensures reaching M . On-line state: {τ,x}.)

SLIDE 48

Output feedback

(Figure: invariant set WI[τ] at time τ, target set M at time ϑ, state space X ; the guaranteed strategy ensures reaching M . If the measurement error is too large, the target must be relaxed from M to a neighborhood Mε.)

SLIDE 49

Feedback control under complete measurements

(Figure: state {τ,x}, disturbances v1, v2, v3, target set M at time ϑ.)

One step: W [τ] = T[τ,ϑ]M .
Multistep: W [τ] = T[τ,τ1] T[τ1,τ2] T[τ2,ϑ]M .
Limit case: WN[τ] → W [τ] as N → ∞, σN = max_i |τi+1 − τi| → 0; W [τ] is the invariant set.

SLIDE 50

Feedback control under Complete Measurements

STATE: {τ,x}
INVARIANT SET: W [τ] = {x : ∀v(·), ∃u(·) : x(ϑ) ∈ M }
CONTROLS: OPEN-LOOP

Invariant set: W [τ] = T[τ,ϑ]M .
This is a linear map T : M [ϑ] → W [τ], calculated through convex analysis, under the Matching Conditions:

P(t) ≡ αQ (t), α ∈ (0,1)
SLIDE 51

TO FIND THE FEEDBACK CONTROL STRATEGY WE NEED THE INVARIANT SET! THE CONTROL THEN ARRIVES FROM THE CONDITION:

U(t,x) = { u : max_v { d d²(x(τ),W [τ])/dt | u,v } ≤ 0 }

NOTE THAT HERE WE HAVE TO SOLVE PROBLEMS IN FINITE TIME. THIS IS MUCH HARDER THAN SOLVING STABILIZATION PROBLEMS (compare value functions with control Lyapunov functions: for linear-quadratic control problems in infinite time the Lyapunov function is a value function).

SLIDE 52

The invariant sets W are the backward reachability sets from the set M .

If W [τ] is calculated through open-loop controls and Wc[τ] is calculated through closed-loop controls, then:

Under Matching Conditions: W [τ] = Wc[τ].
Without Matching Conditions: W [τ] ≠ Wc[τ];
W [τ] = T[τ,ϑ]M , WN[τ] = T[τ,τ1]...T[τN,ϑ]M , WN[τ] → Wc[τ] (N → ∞).
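The backward recursion WN[τ] = T[τ,τ1]...T[τN,ϑ]M can be illustrated in one dimension. A sketch under stated assumptions: a scalar system x(k+1) = x(k) + u + v with |u| ≤ p, |v| ≤ q and an interval target; each backward step first shrinks the interval by the disturbance bound, then inflates it by the control bound (all numbers hypothetical):

```python
def backward_reach_interval(target, steps, p, q):
    """Backward (solvability) set for x(k+1) = x(k) + u + v, |u| <= p,
    |v| <= q, target interval M = [lo, hi]: x belongs to W[k] iff some u
    works for every v, i.e. x + u + [-q, q] fits inside W[k+1]."""
    lo, hi = target
    for _ in range(steps):
        lo, hi = lo + q - p, hi - q + p  # shrink by q (disturbance), grow by p (control)
        if lo > hi:
            return None  # target unreachable under worst-case disturbance
    return lo, hi

W = backward_reach_interval((-1.0, 1.0), steps=10, p=0.3, q=0.1)
```

When the control dominates the disturbance (p > q, the matching situation) the solvability interval grows backward in time; when q > p it eventually empties, mirroring the unsolvable case.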

SLIDE 53

Feedback control under Complete Measurements

STATE: {τ,x}. NO matching conditions. Here the τi are the points of DISCRETE MEASUREMENTS.

WN[τ] = T[τ,τ1]...T[τN,ϑ]M

CONTROLS: piecewise open-loop (compare with model-predictive controls). Limit case (N → ∞, σN = max|τi+1 − τi| → 0):

WN[τ] → Wc[τ] – the INVARIANT SET

SLIDE 54

Output Feedback Control (Incomplete Measurements)

(Figure: invariant set WI[τ] at time τ, target time ϑ, state space X ; the measurement tube for some noise ξ.)

SLIDE 55

Output Feedback Control, Incomplete Measurements

STATE: {τ,X [τ]}. Under Matching Conditions.

Invariant set:
WI[τ] = {X : ∀x ∈ X , ∀v(·), ∃u(·) : x(τ) → x(ϑ) ∈ Msub ⊆ M }
under the on-line state constraint
y(t) − H(t)x(t) ∈ R (t).
Calculated through CLOSED-LOOP CONTROLS:

WI[τ] = ∪{X }, X = x + Ω, with WI[τ] = T[τ,ϑ]M .

SLIDE 56

*** Output Feedback Control, Incomplete Measurements. STATE: {τ,X [τ]}, NO matching conditions.
DISCRETE measurements: WIN[τ] = TI[τ,τ1]...TI[τN,ϑ]M .
CONTINUOUS measurements: limit case (N → ∞, σN → 0):

WIN[τ] → WIc[τ] – the INVARIANT SET.

FROM PIECEWISE-CONTINUOUS SOLUTIONS TO THE FEEDBACK CONTROL SOLUTION STRATEGY.

SLIDE 57

REDUCTION to FINITE-DIMENSIONAL SCHEMES

To find feedback controls we need to calculate:
d(x,W [t]) – under complete measurements (a finite-dimensional scheme);
h+(X [t],WI[t]) – under incomplete output measurements (in general, an infinite-dimensional scheme).

For linear systems with convex constraints we have WIc[t] = Wc[t]. Then, instead of h+(X [t],WIc[t]), we need h+(X [t],Wc[t]).

Under Matching Conditions: WIc[t] = Wc[t] = W [t].

This may be exactly calculated through finite-dimensional schemes.

SLIDE 58

(Figure: the sets X [τ] and W [τ], the semidistance h+(X ,W ), controlled directions, and the selected direction.)

Selection of the control strategy:

U(τ,x) = { u : d h+(X [τ],W [τ])/dτ ≤ 0 }
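The semidistance used here is easy to sketch numerically for finite point clouds (a sampled approximation under stated assumptions, not the exact set-valued computation):

```python
import numpy as np

def h_plus(X, W):
    """Hausdorff semidistance h+(X, W) = max_{x in X} min_{w in W} |x - w|.
    Zero iff every sampled point of X lies in (the sample of) W."""
    D = np.linalg.norm(X[:, None, :] - W[None, :, :], axis=-1)  # |X| x |W| pairwise distances
    return float(D.min(axis=1).max())
```

A strategy satisfying d h+(X [τ],W [τ])/dτ ≤ 0 keeps the tube X [·] from drifting out of W [·]; with point clouds one would check that h+ does not increase from step to step.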

SLIDE 59

EQUATIONS FOR THE SYNTHESIZED SYSTEM

The state of the system is {X [t] = x∗(t) + Ω[t]}. We use the support function of the set Ω[t]:

ϕ(t,l) = ρ(l | Ω[t]) = max{(l,x) | x ∈ Ω[t]}.

The evolution equations for x∗(t), Ω[t] are:

ẋ∗ = B(t)U(t,X ),
∂ρ(l | Ω[t])/∂t = Ψ(t,l,Ω[t],Z[t]).

This is a PDE for the support function ρ(l | Ω[t]) (Z[t] is the measurement set).

SLIDE 60

IF the measurements are DISCRETE, at instants {τi}, then

Ω[τi+1] = Ω[τi+1 − 0] ∩ Z[τi+1],
∂ρ(l | Ω[t])/∂t = ρ(l | C(t)Q (t)), t ∈ [τi,τi+1), Ω[τ0] = X 0.

Between measurements we calculate the "ordinary" reach set, without state constraints. The support function ρ(l | Ω[t]) may be calculated exactly through the Duality Theory of Convex Analysis.
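Between measurement instants the support function grows by ∂ρ/∂t = ρ(l | C(t)Q (t)); at a measurement instant the set is cut by the measurement-consistent set Z. A sketch on a finite grid of directions, using the (conservative) pointwise minimum as an outer bound for the intersection; the identity C, the ball Q of radius q, and all numbers are assumptions:

```python
import numpy as np

# directions on the unit circle
thetas = np.linspace(0.0, 2 * np.pi, 64, endpoint=False)
L = np.stack([np.cos(thetas), np.sin(thetas)], axis=-1)

def grow_step(rho, dt, q=1.0):
    """Euler step of d rho(l | Omega)/dt = rho(l | C Q); for C = I and
    Q the ball of radius q, rho(l | C Q) = q |l| = q on unit directions."""
    return rho + dt * q * np.ones(len(L))

def intersect_outer(rho_a, rho_b):
    """Pointwise min of support functions: an outer bound on the
    intersection (exact only after re-convexification over directions)."""
    return np.minimum(rho_a, rho_b)

rho = np.ones(len(L))                              # Omega[t] = unit ball
rho = grow_step(rho, dt=0.1)                       # reach set grows between measurements
rho = intersect_outer(rho, np.full(len(L), 1.05))  # cut at the measurement instant
```

This mirrors the discretized scheme above: propagate, then intersect at each τi.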

SLIDE 61

But we need effective calculation for LARGE DIMENSIONS. This can be reached through ELLIPSOIDAL or POLYHEDRAL CALCULUS!

SLIDE 62
III. The Solution Through Ellipsoidal Techniques

An ellipsoid (P > 0):
E(p,P) = {x : (x − p, P−1(x − p)) ≤ 1}.
Its support function: ρ(l | E(p,P)) = (l,p) + (l,Pl)^{1/2}.
The target set: M = E(m,M).
Hard bounds: x(t0) ∈ E(x0,X0), u ∈ E(p(t),P(t)), v(t) ∈ E(q(t),Q(t)), ξ(t) ∈ E(0,R(t)).
Here M = M′ > 0, X0 = X0′ > 0, and

P(t) = P′(t) > 0, Q(t) = Q′(t) > 0, R(t) = R′(t) > 0.
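The support function formula above is immediate to implement; a small sketch (the unit-ball example values are assumptions for illustration):

```python
import numpy as np

def ell_support(l, p, P):
    """rho(l | E(p, P)) = (l, p) + (l, P l)^(1/2) for the ellipsoid
    E(p, P) = {x : (x - p, P^{-1}(x - p)) <= 1}, P symmetric positive definite."""
    l = np.asarray(l, dtype=float)
    p = np.asarray(p, dtype=float)
    return float(l @ p + np.sqrt(l @ P @ l))
```

For the unit ball (p = 0, P = I) this reduces to the Euclidean norm of l, which is a quick sanity check.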

SLIDE 63

Stage 1. Solve Problem GSE: find the information set W [τ]. This is actually the reach set of the system
ẇ ∈ C(t)E(q(t),Q(t)), w(t0) ∈ E(x0,X0),
under the on-line state constraint
z∗(t) − H(t)w(t) ∈ E(0,R(t)), t ∈ [t0,τ],
with given z∗(t). We present W [τ] through parametrized external ellipsoids:

W [τ] ⊆ E(w+(τ),W+(τ) | ω(τ)).

SLIDES 64-78

(Figures: ellipsoidal estimates and reach tubes, plotted in the (x1, x2) plane and in (t, x1, x2) coordinates.)
SLIDE 79

Exact ellipsoidal representations of X [τ]

Denote the parameters χ+(τ) = {γu(·), S(·)}, χ−(τ) = {γf(·), S1(·), S2(·)},

E(xe,X−(τ)) = E(xe,X−(τ); χ−(τ)), E(xe,X+(τ)) = E(xe,X+(τ); χ+(τ)).

Then we have

Theorem. The following representation is true:

∪{E(xe,X−(τ)) : χ−(τ)} = X ∗[τ] = ∩{E(xe,X+(τ)) : χ+(τ)}.

SLIDE 80

To specify the control strategy U0(τ,X ,W ) we need the value function

V (τ,X ,W ) = h+(X ,W ),

but use an ellipsoidal approximation:

VEL(τ,X ,W ) = d(E(x(τ),X+(τ)), E(w(τ),W−(τ))),
with X (τ) ⊆ E(x(τ),X+(τ)), W [τ] ⊇ E(w(τ),W−(τ)).

SLIDE 81

Team Control Synthesis. Complete measurements

(Figure: W [t0] = W (t0,t1,M ), target set M , tube W [·]; container, safety zone Br, trajectories x(i)(t), safety set.)

SLIDE 82

Incomplete measurements

(Figure: safety zone Br(x(i)(t)) + X (i)(t) = X (i)B(t); safety set, information set, and the total safety set X (i)B(t).)

SLIDE 83

Team Control Synthesis

Measurements within the team:
y1 = Gx(1) + ξ1, with ‖ξ1‖ ≤ µ1,
y2 = (x(2) − x(1)) + ξ2, with ‖ξ2‖ ≤ µ2‖x(2) − x(1)‖,
y3 = (x(3) − x(2)) + ξ3, with ‖ξ3‖ ≤ µ3‖x(3) − x(2)‖,
y4 = (x(4) − x(3)) + ξ4, with ‖ξ4‖ ≤ µ4‖x(4) − x(3)‖.

Information sets: X (1)(t); X (2) = X (1)(t) + X21(t); X (3) = X (2)(t) + X32(t); X (4) = X (3)(t) + X43(t), with the µi small.

(Figure: network configurations of the team members 1-4.)
