
SLIDE 1

Feedback:

The simple and best solution.

Applications to self-optimizing control and stabilization of new operating regimes

Sigurd Skogestad Department of Chemical Engineering Norwegian University of Science and Technology (NTNU) Trondheim

WebCAST Feb. 2006

SLIDE 2

Abstract

  • Feedback: The simple and best solution
  • Applications to self-optimizing control and stabilization of new operating regimes
  • Sigurd Skogestad, NTNU, Trondheim, Norway
  • Most chemical engineers are (indirectly) trained to be “feedforward thinkers”

and they immediately think of “model inversion” when it comes to doing control. Thus, they prefer to rely on models instead of data, although simple feedback solutions are in many cases much simpler and certainly more robust. The seminar starts with a simple comparison of feedback and feedforward control and their sensitivity to uncertainty. Then two nice applications of feedback are considered:

  • 1. Implementation of optimal operation by "self-optimizing control".

The idea is to turn optimization into a setpoint control problem, and the trick is to find the right variable to control. Applications include process control, pizza baking, marathon running, biology and the central bank of a country.

  • 2. Stabilization of desired operating regimes.

Here feedback control can lead to completely new and simple solutions. One example would be stabilization of laminar flow at conditions where we normally have turbulent flow. In the seminar, a nice application to anti-slug control in multiphase pipeline flow is discussed.

SLIDE 3

Outline

  • About Trondheim
  • I. Why feedback (and not feedforward) ?
  • II. Self-optimizing feedback control: What should we control?
  • III. Stabilizing feedback control: Anti-slug control
  • Conclusion
  • More information:
SLIDE 4

Trondheim, Norway

SLIDE 5

Trondheim

[Map: Norway and Trondheim, with Oslo, the North Sea, UK, Denmark, Germany, Sweden, and the Arctic circle marked]

SLIDE 6

NTNU, Trondheim

SLIDE 7

Outline

  • About Trondheim
  • I. Why feedback (and not feedforward) ?
  • II. Self-optimizing feedback control: What should we control?
  • III. Stabilizing feedback control: Anti-slug control
  • Conclusion
SLIDE 8

Example

[Block diagram: input u through G and disturbance d through Gd sum to output y (uncontrolled plant); disturbance step response with k = 10, shown to t = 25]

SLIDE 9

[Block diagram: input u through G and disturbance d through Gd sum to output y]

SLIDE 10

Model-based control = Feedforward (FF) control

”Perfect” feedforward control: u = −G⁻¹ Gd d. Our case: G = Gd → use u = −d
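The feedforward idea on this slide can be sketched numerically. A minimal sketch, assuming a first-order plant with G = Gd = 10/(10s + 1) (the time constant of 10 is an assumption; the slides only give the gain k = 10):

```python
import numpy as np

def ff_response(k_actual, kd=10.0, tau=10.0, d=1.0, t_end=100.0, dt=0.01):
    """Steady output under feedforward u = -d (exact only if the actual
    plant gain k_actual equals the model/disturbance gain kd)."""
    u, y = -d, 0.0
    for _ in np.arange(0.0, t_end, dt):
        # first-order plant: tau * dy/dt = -y + k_actual*u + kd*d  (Euler step)
        y += dt / tau * (-y + k_actual * u + kd * d)
    return y

print(ff_response(k_actual=10.0))  # perfect model: y stays at 0
print(ff_response(k_actual=12.0))  # 20% gain error: y drifts to about -2
```

With a perfect model the disturbance is cancelled exactly; a 20% gain error leaves a steady offset of (kd − k_actual)·d = −2, which is the point of the following slides on feedforward sensitivity.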

SLIDE 11


Feedforward control: Nominal (perfect model)

SLIDE 12


Feedforward: sensitive to gain error

SLIDE 13


Feedforward: sensitive to time constant error

SLIDE 14


Feedforward: moderately sensitive to time delay (in G or Gd)

SLIDE 15

Measurement-based correction = Feedback (FB) control

[Block diagram: controller C computes input u from error e = ys − y; disturbance d enters through Gd; plant G]

SLIDE 16

Feedback PI-control: Nominal case


Plots of input u and output y: feedback generates the inverse! (resulting output shown)
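The feedback counterpart can be sketched with the same (assumed) first-order plant as in the feedforward sketch; the PI tuning Kc = 0.5, τI = 10 is illustrative, not from the slides:

```python
import numpy as np

def fb_response(k_actual, kd=10.0, tau=10.0, d=1.0, Kc=0.5, tauI=10.0,
                t_end=200.0, dt=0.01):
    """Steady output under PI feedback u = Kc*(e + integral(e)/tauI), e = ys - y."""
    y, integ = 0.0, 0.0
    for _ in np.arange(0.0, t_end, dt):
        e = 0.0 - y           # setpoint ys = 0
        integ += e * dt
        u = Kc * (e + integ / tauI)
        # same assumed first-order plant: tau * dy/dt = -y + k_actual*u + kd*d
        y += dt / tau * (-y + k_actual * u + kd * d)
    return y

print(fb_response(k_actual=10.0))  # nominal: integral action drives y to 0
print(fb_response(k_actual=20.0))  # 100% gain error: y still returns to 0
```

The integral action removes the offset even under large gain error, which is exactly why feedback is insensitive to gain error while feedforward is not.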

SLIDE 17


Feedback PI control: insensitive to gain error

SLIDE 18

Feedback: insensitive to time constant error

SLIDE 19

Feedback control: sensitive to time delay

SLIDE 20

Comment

  • Time delay error in the disturbance model (Gd): no effect (!) with feedback (except a time shift)
  • Feedforward: similar effect as a time delay error in G
SLIDE 21

Conclusion: Why feedback? (and not feedforward control)

  • Simple: High gain feedback!
  • Counteract unmeasured disturbances
  • Reduce effect of changes / uncertainty (robustness)
  • Change system dynamics (including stabilization)
  • Linearize the behavior
  • No explicit model required
  • MAIN PROBLEM
  • Potential instability (may occur “suddenly”) with time delay/RHP-zero
SLIDE 22

Outline

  • About Trondheim
  • Why feedback (and not feedforward) ?
  • II. Self-optimizing feedback control: What should we control?
  • Stabilizing feedback control: Anti-slug control
  • Conclusion
SLIDE 23

Optimal operation (economics)

  • Define scalar cost function J(u0,d)

– u0: degrees of freedom
– d: disturbances

  • Optimal operation for given d:

min_{u0} J(u0, x, d)

subject to: f(u0, x, d) = 0,  g(u0, x, d) ≤ 0
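Solving this optimization numerically for each estimated d is the “feedforward” route discussed on the next slide. A minimal sketch with a hypothetical scalar quadratic cost and a single operational constraint (both invented for illustration):

```python
from scipy.optimize import minimize

# Hypothetical scalar cost J(u0, d) = (u0 - d)^2 + 0.1*u0^2
# with one operational constraint g(u0) = u0 - 0.5 <= 0.
def optimal_u(d):
    res = minimize(lambda u: (u[0] - d) ** 2 + 0.1 * u[0] ** 2,
                   x0=[0.0], method="SLSQP",
                   constraints=[{"type": "ineq", "fun": lambda u: 0.5 - u[0]}])
    return res.x[0]

print(optimal_u(d=0.3))  # unconstrained optimum u0* = d/1.1, about 0.273
print(optimal_u(d=2.0))  # constraint active: u0* = 0.5
```

For large d the constraint becomes active, illustrating the point made later in the seminar that the optimal solution is usually at constraints; the interesting question is what to control with the remaining unconstrained degrees of freedom.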

SLIDE 24

”Obvious” solution: Optimizing control = ”feedforward”

Estimate d and compute new uopt(d). Problem: complicated and sensitive to uncertainty.

SLIDE 25

Engineering systems

  • Most (all?) large-scale engineering systems are controlled using hierarchies of quite simple single-loop controllers:

– Commercial aircraft
– Large-scale chemical plant (refinery)

  • 1000’s of loops
  • Simple components: on-off + P-control + PI-control + nonlinear fixes + some feedforward

Same in biological systems

SLIDE 26

In Practice: Feedback implementation

Issue: What should we control?

SLIDE 27

Further layers: process control hierarchy (RTO → MPC → PID). y1 = c? (economics)

SLIDE 28

Implementation of optimal operation

  • Optimal solution is usually at constraints, that is, most of the degrees of freedom are used to satisfy “active constraints”, g(u0,d) = 0
  • CONTROL ACTIVE CONSTRAINTS!

– Implementation of active constraints is usually simple.

  • WHAT MORE SHOULD WE CONTROL?

– We here concentrate on the remaining unconstrained degrees of freedom.

SLIDE 29

Optimal operation

[Plot: cost J vs. controlled variable c; minimum Jopt at c = copt]
SLIDE 30

Optimal operation

[Plot: cost J vs. controlled variable c; minimum Jopt at c = copt]

Two problems:

  • 1. Optimum moves because of disturbances d: copt(d)
  • 2. Implementation error, c = copt + n


SLIDE 31

Effect of implementation error

[Plots: optimum shapes labelled BAD, Good, Good; a flat optimum tolerates implementation error]

SLIDE 32

Self-optimizing Control

c = cs

  • Self-optimizing control is when acceptable operation (= acceptable loss) can be achieved using constant setpoints (cs) for the controlled variables c (without the need to re-optimize when disturbances occur).

  • Define loss:
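The loss definition on this slide was an equation image in the original; a standard statement of it from the self-optimizing control literature (a reconstruction, not taken from the slide itself) is:

```latex
% Loss for a given disturbance d when u is adjusted by feedback
% to hold c = c_s + n (constant setpoint plus implementation error n):
L(d, n) = J\big(u(c_s + n, d),\, d\big) - J_{\mathrm{opt}}(d) \;\ge\; 0
```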
SLIDE 33

Self-optimizing Control – Marathon

  • Optimal operation of Marathon runner, J=T

– Any self-optimizing variable c (to control at constant setpoint)?

SLIDE 34

Self-optimizing Control – Marathon

  • Optimal operation of Marathon runner, J=T

– Any self-optimizing variable c (to control at constant setpoint)?

  • c1 = distance to leader of race
  • c2 = speed
  • c3 = heart rate
  • c4 = level of lactate in muscles
SLIDE 35

Self-optimizing Control – Marathon

  • Optimal operation of Marathon runner, J=T

– Any self-optimizing variable c (to control at constant setpoint)?

  • c1 = distance to leader of race (Problem: Feasibility for d)
  • c2 = speed (Problem: Feasibility for d)
  • c3 = heart rate (Problem: Impl. Error n)
  • c4 = level of lactate in muscles (Problem: Impl.error n)
SLIDE 36

Self-optimizing Control – Sprinter

  • Optimal operation of Sprinter (100 m), J=T

– Active constraint control:

  • Maximum speed (”no thinking required”)
SLIDE 37

Further examples

  • Central bank. J = welfare. u = interest rate. c = inflation rate (2.5%)
  • Cake baking. J = nice taste. u = heat input. c = temperature (200 °C)
  • Business. J = profit. c = ”key performance indicator” (KPI), e.g.:

– Response time to order
– Energy consumption per kg or unit
– Number of employees
– Research spending

Optimal values obtained by ”benchmarking”

  • Investment (portfolio management). J = profit. c = fraction of investment in shares (50%)

  • Biological systems:

– ”Self-optimizing” controlled variables c have been found by natural selection
– Need to do ”reverse engineering”:

  • Find the controlled variables used in nature
  • From this, possibly identify what overall objective J the biological system has been attempting to optimize

SLIDE 38

Candidate controlled variables c for self-optimizing control

Intuitive

1. The optimal value of c should be insensitive to disturbances (avoids problem 1)
2. The optimum should be flat (avoids problem 2, implementation error). Equivalently: the value of c should be sensitive to the degrees of freedom u. “Want large gain”

Charlie Moore (1980’s): Maximize minimum singular value when selecting temperature locations for distillation

SLIDE 39

Mathematical: Local analysis

[Plot: cost J vs. input u with minimum at uopt; linearized model c = G u]

SLIDE 40

Minimum singular value of scaled gain

Maximum gain rule (Skogestad and Postlethwaite, 1996): look for variables c that maximize the minimum singular value of the appropriately scaled steady-state gain matrix Gs from u to c.
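The rule is easy to apply numerically. A minimal sketch; the two candidate scaled gain matrices below are hypothetical, chosen only to contrast a well-conditioned candidate set with a nearly singular one:

```python
import numpy as np

def sigma_min(G):
    """Minimum singular value of a gain matrix (numpy returns them sorted
    in descending order, so the last one is the minimum)."""
    return np.linalg.svd(G, compute_uv=False)[-1]

# Hypothetical scaled steady-state gain matrices Gs (from inputs u to
# candidate controlled variables c):
Gs_A = np.array([[4.0, 0.5],
                 [0.3, 3.0]])   # well-conditioned candidate set
Gs_B = np.array([[4.0, 3.9],
                 [4.1, 4.0]])   # nearly collinear rows: almost singular

print(sigma_min(Gs_A))  # large minimum singular value: preferred candidates
print(sigma_min(Gs_B))  # tiny minimum singular value: poor candidates
```

By the maximum gain rule, the candidate set Gs_A would be preferred: a small minimum singular value means some disturbance direction moves the cost a lot while barely moving c.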

SLIDE 41

Self-optimizing control: Recycle process

J = V (minimize energy)

Nm = 5 degrees of freedom. Given feedrate F0 and column pressure, 3 economic (steady-state) DOFs remain.

Constraints: Mr < Mrmax, xB > xBmin = 0.98

(DOF = degree of freedom)

SLIDE 42

Recycle process: Control active constraints

Active constraint: Mr = Mrmax. Active constraint: xB = xBmin.

One unconstrained DOF left for optimization: What more should we control?

Remaining DOF: L

SLIDE 43

Maximum gain rule: Steady-state gain

Luyben snow-ball rule: not promising economically. Conventional: looks good.

SLIDE 44

Recycle process: Loss with constant setpoint, cs

Large loss with c = F (Luyben rule). Negligible loss with c = L/F or c = temperature.
SLIDE 45

Recycle process: Proposed control structure

for case with J = V (minimize energy)

Active constraint: Mr = Mrmax. Active constraint: xB = xBmin.

Self-optimizing loop: Adjust L such that L/F is constant

SLIDE 46

Outline

  • About myself
  • Why feedback (and not feedforward) ?
  • Self-optimizing feedback control: What should we control?
  • III. Stabilizing feedback control: Anti-slug control
  • Conclusion
SLIDE 47

Application of stabilizing feedback control:

Anti-slug control

[Illustration: two-phase pipe flow (liquid and vapor) with slug (liquid) buildup]

SLIDE 48

Slug cycle (stable limit cycle)

Experiments performed by the Multiphase Laboratory, NTNU

SLIDE 49

[Flow map with open valve: axes Usg [m/s] vs. Uso [m/s]; regimes: steady flow, steady/pulsing, pulsing flow, pulsing/slugging, riser slugging]

SLIDE 50

Experimental mini-loop

SLIDE 51

[Diagram: mini-loop with pressure measurements p1, p2 and valve opening z]

Experimental mini-loop: valve opening (z) = 100%

SLIDE 52

Experimental mini-loop: valve opening (z) = 25%

SLIDE 53

Experimental mini-loop: valve opening (z) = 15%

SLIDE 54

Experimental mini-loop: bifurcation diagram

[Bifurcation plot: pressure vs. valve opening z (%); no slug at small openings, slugging at large openings]

SLIDE 55

Avoid slugging?

  • Design changes
  • Feedforward control?
  • Feedback control?
SLIDE 56


Avoid slugging:

  • 1. Close valve (but increases pressure)


No slugging when valve is closed

Design change

SLIDE 57

Avoid slugging:

  • 2. Other design changes to avoid slugging


Design change

SLIDE 58

Minimize effect of slugging:

  • 3. Build large slug-catcher
  • Most common strategy in practice


Design change

SLIDE 59

Avoid slugging: 4. Feedback control?

[Bifurcation diagram compared with a simple 3-state model: predicted smooth flow is desirable but open-loop unstable]

SLIDE 60

Avoid slugging:

  • 4. ”Active” feedback control

[Diagram: pressure transmitter (PT) measures p1; simple PI pressure controller (PC) with reference adjusts valve opening z]
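The “simple PI-controller” of this slide can be sketched as follows. The tuning values, the sign convention (opening the valve lowers p1, hence a negative Kc), and the saturation limits are assumptions for illustration, not taken from the experiments:

```python
class PIController:
    """Discrete PI controller for the anti-slug loop: measure upstream
    pressure p1, manipulate valve opening z (0..1)."""

    def __init__(self, Kc, tauI, dt, u0, umin=0.0, umax=1.0):
        self.Kc, self.tauI, self.dt = Kc, tauI, dt
        self.u0 = u0            # nominal valve opening (bias)
        self.integ = 0.0        # running integral of the error
        self.umin, self.umax = umin, umax

    def update(self, setpoint, measurement):
        e = setpoint - measurement
        self.integ += e * self.dt
        u = self.u0 + self.Kc * (e + self.integ / self.tauI)
        return min(max(u, self.umin), self.umax)  # valve saturation

# Hypothetical use: hold p1 at a setpoint below the slugging pressure.
pc = PIController(Kc=-0.5, tauI=100.0, dt=1.0, u0=0.2)
z = pc.update(setpoint=0.7, measurement=0.7)  # on target: z stays at u0
```

In the experiments the loop is closed around the measured pressure p1; the controller keeps the (open-loop unstable) smooth-flow operating point stable at valve openings where the uncontrolled system would slug.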

SLIDE 61

Anti slug control: Mini-loop experiments

[Experimental time series of p1 [bar] and valve opening z [%], with periods of controller ON and controller OFF]

SLIDE 62

Anti slug control: Full-scale offshore experiments at the Hod-Vallhall field (Havre, 1999)

SLIDE 63

Analysis: Poles and zeros

[Table: zeros of the linearized model at two operating points (valve opening z = 0.25 and z = 0.175), for candidate measurements FW [kg/s], FQ [m3/s], ρT [kg/m3], DP [Bar], P1 [Bar]]

Operating points and poles (unstable complex pair):
  z = 0.25:  poles 0.0027 ± 0.0092i  (P1 = 69 Bar, DP = 6.21 Bar)
  z = 0.175: poles 0.0008 ± 0.0067i  (P1 = 70.05 Bar, DP = 6.11 Bar)

Topside measurements (ρT, DP, FT): Ooops.... RHP-zeros or zeros close to origin

SLIDE 64

Stabilization with topside measurements: avoid “RHP-zeros” by using 2 measurements

  • Model-based control (LQG) with 2 topside measurements: DP and density ρT

[Simulation plots over 120 min: valve opening z, P1 [Bar] (65–75), P2 [Bar] (50–54)]

SLIDE 65

Summary anti slug control

  • Stabilization of the smooth flow regime = $$$$!
  • Stabilization using downhole pressure is simple
  • Stabilization using topside measurements is possible
  • Control can make a difference!

Thanks to: Espen Storkaas + Heidi Sivertsen and Ingvald Bårdsen

SLIDE 66

Conclusions

  • Feedback is an extremely powerful tool
  • Complex systems can be controlled by hierarchies (cascades) of single-input-single-output (SISO) control loops
  • Control the right variables (primary outputs) to achieve ”self-optimizing control”
  • Feedback can make new things possible (anti-slug)