SLIDE 1

In Situ Adaptive Tabulation for Real-Time Control

  • J. D. Hedengren
  • T. F. Edgar

The University of Texas at Austin 2004 American Control Conference Boston, MA

SLIDE 2

Outline

  • Model reduction and computational reduction

  • Introduction to ISAT
  • ISAT theory
  • Application #1: Combined Approach
  • Application #2: ISAT vs. Neural Nets
  • Conclusions
SLIDE 3

Model Reduction

  • Optimally reduce the number of model variables
  • Linear combination of states that retains the most important dynamics
  • Methods
    – Proper Orthogonal Decomposition (or PCA)
    – Balanced Covariance Matrices

SLIDE 4

Model Reduction

Original ODE model:

$$\dot{x} = f(x,u), \qquad y = h(x) \tag{1}$$

Determine a similarity transform $\bar{x} = Tx$ to optimally reduce the model states:

$$\frac{d}{dt}(Tx) = T\,f(T^{-1}(Tx),\,u), \qquad y = h(T^{-1}(Tx)) \tag{2a}$$

Transformed states:

$$\dot{\bar{x}} = \bar{f}(\bar{x},u), \qquad y = \bar{h}(\bar{x}) \tag{2b}$$
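A minimal Python sketch of the similarity transform in Eqs. (1)–(2). The model `f`, the transform `T`, and all numerical values are hypothetical placeholders, not the distillation model from the slides:

```python
import numpy as np

# Transform a model x_dot = f(x, u) with x_bar = T x, giving
# x_bar_dot = T f(T^{-1} x_bar, u), as in Eqs. (2a)-(2b).

def f(x, u):
    # toy nonlinear right-hand side (illustrative only)
    return np.array([-x[0] + u, -2.0 * x[1] + x[0] ** 2])

T = np.array([[1.0, 1.0], [1.0, -1.0]])   # assumed similarity transform
T_inv = np.linalg.inv(T)

def f_bar(x_bar, u):
    # transformed right-hand side: f_bar(x_bar, u) = T f(T^{-1} x_bar, u)
    return T @ f(T_inv @ x_bar, u)

x = np.array([0.5, -0.2])
u = 1.0
# the transformed derivative equals T times the original derivative
print(np.allclose(f_bar(T @ x, u), T @ f(x, u)))  # True
```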

SLIDE 5

Model Reduction

                    − =            

32 1 3 _ 2 _ 1 _

0.202

  • 9

. 4 0.060

  • 5

. 49 0.015 1 . 9 x x x x x ⋮ ⋯ ⋯ ⋯ Binary distillation model reduction shows the relative weighting of the 32

  • riginal states in the top 3 transformed states.

x32 x1 Inputs States RR x17 x31 x2 Feed Distillate Bottoms

(3)

SLIDE 6

Model Reduction

                      =                       ) , ( ) , ( ) , (

_ 3 __ _ 2 __ _ 1 __ 32 _ 4 _ 3 _ 2 _ 1 _

⋮ ɺ ⋮ ɺ ɺ ɺ ɺ u x f u x f u x f x x x x x                       =                     ) , ( ) , ( ) , ( ) , ( ) , (

_ 32 __ _ 4 __ _ 3 __ _ 2 __ _ 1 __ 3 _ 2 _ 1 _

u x f u x f u x f u x f u x f x x x ⋮ ⋮ ɺ ɺ ɺ

Truncation Residualization
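The truncation/residualization distinction can be sketched on a linear surrogate. The 4-state matrices below are hypothetical stand-ins (the slides use the 32-state distillation model); the sketch shows the known property that residualization preserves the steady-state gain while truncation generally does not:

```python
import numpy as np

# Linear surrogate x_bar_dot = A x_bar + B u, partitioned into slow/fast blocks.
A = np.array([[-1.0, 0.2, 0.1, 0.0],
              [0.1, -2.0, 0.0, 0.1],
              [0.0, 0.1, -10.0, 0.2],
              [0.1, 0.0, 0.2, -20.0]])
B = np.array([[1.0], [0.5], [0.2], [0.1]])
k = 2  # number of retained (slow) states

A11, A12 = A[:k, :k], A[:k, k:]
A21, A22 = A[k:, :k], A[k:, k:]
B1, B2 = B[:k], B[k:]

# Truncation: simply discard the fast states.
A_trunc, B_trunc = A11, B1

# Residualization: set the fast derivatives to zero and solve them out,
# x2 = -A22^{-1} (A21 x1 + B2 u), then substitute into the slow dynamics.
A_res = A11 - A12 @ np.linalg.solve(A22, A21)
B_res = B1 - A12 @ np.linalg.solve(A22, B2)

# Residualization preserves the steady-state gain of the full model.
gain_full = -np.linalg.solve(A, B)[:k]
gain_res = -np.linalg.solve(A_res, B_res)
print(np.allclose(gain_full, gain_res))  # True
```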

SLIDE 7

Computational Reduction

  • Retain all of the dynamics
  • Storage and retrieval to reduce the computational cost
  • Methods
    – Artificial neural networks
    – In situ adaptive tabulation (ISAT)

SLIDE 8

Combined Approach

  • Combined approach for NMPC
    – Model reduction first
    – Computational reduction second

First Principles Model → (Balanced Covariance Matrices) → Reduced model → (ISAT) → Storage and retrieval of reduced model integrations

SLIDE 9

ISAT Introduction

A nearby ISAT record at φ0 approximates the desired integration: the record's final state φf is corrected with the stored sensitivity, φf + A δφ0, leaving an approximation error. The tabulated point combines the input and state:

$$\phi = \begin{bmatrix} u\\ x \end{bmatrix}$$

  • Fig. 1. Approximation of the desired integration final state with a nearby ISAT record.
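A minimal sketch of the retrieval step in Fig. 1: a record stores its query point, its integrated result, and a sensitivity matrix, and nearby queries are answered by a linear correction. The `integrate` map and all values are hypothetical stand-ins (chosen linear so the approximation can be checked exactly):

```python
import numpy as np

# An ISAT record stores phi0, its integration result phif0, and the
# sensitivity A = d(phif)/d(phi0).  A nearby query phi is approximated
# as phif ~= phif0 + A (phi - phi0).

def integrate(phi):
    # stand-in for an expensive ODE integration over one control interval
    M = np.array([[0.9, 0.1], [0.0, 0.8]])
    return M @ phi

phi0 = np.array([1.0, 2.0])
record = {
    "phi0": phi0,
    "phif0": integrate(phi0),
    "A": np.array([[0.9, 0.1], [0.0, 0.8]]),  # sensitivity (exact here)
}

def isat_retrieve(record, phi):
    dphi0 = phi - record["phi0"]
    return record["phif0"] + record["A"] @ dphi0

phi = np.array([1.05, 1.9])
print(np.allclose(isat_retrieve(record, phi), integrate(phi)))  # True for a linear map
```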

SLIDE 10

ISAT Search

  • Binary Tree Architecture
    – Search times are O(log2(N)) compared with O(N) for a sequential search

Each branch stores a cutting plane between records φ1 and φ2:

$$v = \phi_2 - \phi_1, \qquad \alpha = v^T\,\frac{\phi_1 + \phi_2}{2}$$

A query φ descends one way when $v^T\phi < \alpha$ and the other when $v^T\phi > \alpha$.
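The cutting-plane descent above can be sketched directly. The dict-based node layout is an assumption for illustration; only the plane definition and the comparison come from the slide:

```python
import numpy as np

# Each branch stores v = phi2 - phi1 and alpha = v^T (phi1 + phi2) / 2;
# a query phi descends left when v^T phi < alpha, right otherwise.

def make_branch(left_leaf, right_leaf):
    phi1, phi2 = left_leaf["phi"], right_leaf["phi"]
    v = phi2 - phi1
    alpha = v @ (phi1 + phi2) / 2.0
    return {"v": v, "alpha": alpha, "left": left_leaf, "right": right_leaf}

def search(node, phi):
    # descend to the leaf (record) on the query's side of each cutting plane
    while "v" in node:
        node = node["left"] if node["v"] @ phi < node["alpha"] else node["right"]
    return node

leaf_a = {"phi": np.array([0.0, 0.0])}
leaf_b = {"phi": np.array([2.0, 0.0])}
tree = make_branch(leaf_a, leaf_b)

print(search(tree, np.array([0.3, 1.0])) is leaf_a)   # True: closer to phi1
print(search(tree, np.array([1.8, -1.0])) is leaf_b)  # True: closer to phi2
```

In a full implementation each leaf would hold a complete ISAT record (φ0, φf, A, M); here only φ is stored to keep the search logic visible.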

SLIDE 11

Binary Trees

Cutting plane $v^T\phi = \alpha$ separating records φ1 and φ2; the tree consists of branches and leaves.

  • Fig. 2. An illustration of the binary tree structure in the cutting plane format (on the left) and the tree format (on the right).

SLIDE 12

Binary Tree Growth

Before: records φ1 and φ2. After: a new record φ3 is added.

  • Fig. 3. Binary tree growth. A tree with one branch and two leaves is grown to include another leaf.

SLIDE 13

Binary Trees

  • To increase the accuracy of the binary tree search, multiple binary trees are searched.
  • This increases the probability of finding a better record.
  • The number of binary trees is a tuning parameter that balances search speed with search accuracy.

SLIDE 14

ISAT Integration

  • Scenario #1: Inside the region of accuracy

$$(\phi - \phi_1)^T M (\phi - \phi_1) \le \epsilon_{tol}$$
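The Scenario #1 test is a single quadratic-form evaluation. The shape matrix `M` and tolerance below are assumed values for illustration:

```python
import numpy as np

# A query phi is inside a record's ellipsoidal region of accuracy when
# (phi - phi1)^T M (phi - phi1) <= eps_tol.

def in_region_of_accuracy(phi, phi1, M, eps_tol):
    d = phi - phi1
    return float(d @ M @ d) <= eps_tol

phi1 = np.array([1.0, 0.0])
M = np.diag([4.0, 1.0])   # ellipsoid shape matrix (assumed)
eps_tol = 1.0

print(in_region_of_accuracy(np.array([1.2, 0.5]), phi1, M, eps_tol))  # True
print(in_region_of_accuracy(np.array([2.0, 0.0]), phi1, M, eps_tol))  # False
```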

SLIDE 15

ISAT Integration

  • Scenario #2: Outside the region of accuracy but within the error tolerance

$$(\phi - \phi_1)^T M (\phi - \phi_1) > \epsilon_{tol}$$

Compute $M_{new}$ so that the new region is a symmetric, minimum-volume ellipsoid that includes φ.
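A sketch of the ellipsoid-growth step. ISAT computes a symmetric, minimum-volume ellipsoid including φ; as a simpler conservative stand-in (not the minimum-volume update), this sketch uniformly scales `M` so that φ lands exactly on the new boundary:

```python
import numpy as np

# Scenario #2 stand-in: grow the region so phi is included.  Uniform scaling
# is conservative and NOT the minimum-volume ellipsoid used in ISAT proper.

def grow_region(M, phi1, phi, eps_tol):
    d = phi - phi1
    q = float(d @ M @ d)          # current quadratic form value, q > eps_tol
    return M * (eps_tol / q)      # scaled so (phi-phi1)^T M_new (phi-phi1) = eps_tol

M = np.diag([4.0, 1.0])
phi1 = np.zeros(2)
phi = np.array([1.0, 0.0])       # q = 4 > eps_tol = 1, so phi is outside
M_new = grow_region(M, phi1, phi, eps_tol=1.0)
d = phi - phi1
print(abs(d @ M_new @ d - 1.0) < 1e-12)  # True: phi is now on the boundary
```

The uniform scaling inflates the ellipsoid in every direction, so it includes more volume than necessary; the minimum-volume update modifies `M` only along the direction of φ.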

SLIDE 16

ISAT Integration

  • Scenario #3: Outside the region of accuracy and outside the error tolerance

Define a cutting plane between the stored record φ1 and the new record φ:

$$v = \phi - \phi_1, \qquad \alpha = v^T\,\frac{\phi_1 + \phi}{2}$$

Find a conservative estimate for the region of accuracy around φ.
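The Scenario #3 record addition can be sketched as replacing the retrieved leaf with a branch (cf. Fig. 3). The dict-based node layout is an assumption carried over for illustration:

```python
import numpy as np

# Replace the retrieved leaf phi1 by a branch whose cutting plane uses
# v = phi - phi1 and alpha = v^T (phi1 + phi) / 2, with the old and new
# records as its two leaves.

def add_record(leaf, phi):
    phi1 = leaf["phi"]
    v = phi - phi1
    alpha = v @ (phi1 + phi) / 2.0
    new_leaf = {"phi": phi}
    # v^T phi1 - alpha = -|v|^2 / 2 < 0, so the old leaf always sits on
    # the v^T phi < alpha side and the new record on the other.
    return {"v": v, "alpha": alpha, "left": leaf, "right": new_leaf}

def search(node, phi):
    while "v" in node:
        node = node["left"] if node["v"] @ phi < node["alpha"] else node["right"]
    return node

leaf = {"phi": np.array([0.0, 0.0])}
tree = add_record(leaf, np.array([2.0, 0.0]))

print(search(tree, np.array([0.1, 0.0]))["phi"][0])  # 0.0 -> old record
print(search(tree, np.array([1.9, 0.0]))["phi"][0])  # 2.0 -> new record
```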
SLIDE 17

Application #1: Binary Distillation

32-state ODE model of binary distillation → (Balanced Covariance Matrices) → 5-state reduced model → (ISAT) → storage and retrieval of integrations

  • Fig. 4. Model and computational reduction flowchart.
SLIDE 18

Closed-loop Response

[Plot: distillate composition xA (0.91–0.94) vs. time (5–25 min); curves: set point, 5 states/ISAT, 5 states, 32 states, 32 states/Linear]

  • Fig. 5. Closed-loop response comparison for nonlinear MPC with ISAT with 5 states, nonlinear MPC with 5 states, nonlinear MPC with 32 states, and linear MPC.

SLIDE 19

CPU times

[Plot: speed-up factor (20–120) vs. optimization # (1–5); curves: 5 states/ISAT, 5 states, 32 states, 32 states/Linear; average CPU times: 0.26 sec, 0.77 sec, 9.3 sec, 22.2 sec]

  • Fig. 6. Speed-up factor for each of the optimizations shown in Fig. 5. The number above each curve indicates the average optimization CPU time on a 2 GHz processor.

SLIDE 20

Application #2: ISAT vs. neural net

[Diagram: two CSTRs (V1, V2) in series; feed composition CA1, product composition CA2; reaction A → B; temperatures T1, T2; flows q, Q]

  • Dual CSTR model
  • Fig. 7. Diagram of two CSTRs in series with a first-order reaction. The manipulated variable is the cooling rate to the first CSTR.

SLIDE 21

Artificial Neural Network

7 inputs → Layer 1: hyperbolic tangent sigmoid transfer function, 20 neurons → Layer 2: linear transfer function, 6 neurons → 6 outputs

  • Fig. 8. Neural net with one hidden layer and one output layer. The hidden layer is a hyperbolic tangent function and the output layer is a linear function. This neural net relates 7 inputs to 6 outputs.
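The forward pass of the Fig. 8 architecture can be sketched in a few lines. The weights here are random placeholders (the actual net was trained on simulation data):

```python
import numpy as np

# 7 inputs -> 20 tanh-sigmoid hidden neurons -> 6 linear outputs.
rng = np.random.default_rng(0)
W1, b1 = rng.standard_normal((20, 7)), rng.standard_normal(20)   # hidden layer
W2, b2 = rng.standard_normal((6, 20)), rng.standard_normal(6)    # output layer

def neural_net(x):
    h = np.tanh(W1 @ x + b1)   # hyperbolic tangent sigmoid transfer function
    return W2 @ h + b2         # linear transfer function

y = neural_net(np.zeros(7))
print(y.shape)  # (6,)
```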

SLIDE 22

Open-loop Response

[Plot: temperature (340–460 K) vs. time (1–10 min); curves: Actual, Neural Net, ISAT; markers: ISAT Retrieval, ISAT Growth, ISAT Addition]

  • Fig. 9. The error control of ISAT indicates that additional records must be added, thereby avoiding extrapolation error.

SLIDE 23

Closed-loop Response #1

[Plot: Reactor #2 temperature (444–454 K) vs. time (0.5–2 min); curves: set point, 6 states/ISAT, 6 states, 6 states/Neural Net]

  • Fig. 10. Small closed-loop set point change within the training domain.

SLIDE 24

Closed-loop Response #2

[Plot: Reactor #2 temperature (435–455 K) vs. time (0.5–2 min); curves: set point, 6 states/ISAT, 6 states, 6 states/Neural Net]

  • Fig. 11. Large closed-loop set point change outside of the training domain.

SLIDE 25

Summary and Conclusions

  • Combined approach includes model reduction followed by computational reduction
  • ISAT is a storage and retrieval method
  • With the 32-state binary distillation model, the CPU time for NMPC is reduced by a factor of 85

SLIDE 26

Summary and Conclusions

  • ISAT indicates when the retrieval is outside of the storage domain
  • ISAT incorporates automatic error control to avoid extrapolation errors