In Situ Adaptive Tabulation for Real-Time Control
- J. D. Hedengren
- T. F. Edgar
The University of Texas at Austin
2004 American Control Conference, Boston, MA

Outline
- Model reduction and computational reduction
- Introduction to ISAT
Original ODE model:

    ẋ = f(x, u)    (1)

Determine a similarity transform T to optimally reduce the model states:

    x̄ = T x    (2a)
    x = T⁻¹ x̄    (2b)

Transformed states:

    [ x̄₁ ]   [ 0.202  ⋯ ] [ x₁  ]
    [ x̄₂ ] = [ 0.060  ⋯ ] [ x₂  ]
    [ x̄₃ ]   [ 0.015  ⋯ ] [ ⋮   ]
                           [ x₃₂ ]

Binary distillation model reduction shows the relative weighting of the 32 original states in each transformed state.
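One standard way to obtain such a similarity transform for a linear (or linearized) model is square-root balanced truncation, which matches the "balanced covariance matrices" step of this talk. A minimal sketch, assuming scipy is available; the A, B, C matrices here are illustrative placeholders, not the distillation model:

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

# Illustrative stable linear system (not the distillation model)
A = np.array([[-1.0, 0.5], [0.0, -2.0]])
B = np.array([[1.0], [1.0]])
C = np.array([[1.0, 0.0]])

# Controllability and observability gramians:
#   A P + P A^T + B B^T = 0,   A^T Q + Q A + C^T C = 0
P = solve_continuous_lyapunov(A, -B @ B.T)
Q = solve_continuous_lyapunov(A.T, -C.T @ C)

# Square-root balancing: both gramians become diag(Hankel singular values)
Lc = np.linalg.cholesky(P)             # P = Lc Lc^T
Lo = np.linalg.cholesky(Q)             # Q = Lo Lo^T
U, s, Vt = np.linalg.svd(Lo.T @ Lc)
S = np.diag(s ** -0.5)
T = S @ U.T @ Lo.T                     # x_bar = T x
Tinv = Lc @ Vt.T @ S                   # x = Tinv x_bar

# States with small Hankel singular values can be truncated.
print(np.diag(T @ P @ T.T))            # ~ Hankel singular values
```

In the balanced coordinates the rows of T show how heavily each original state is weighted in each transformed state, which is what the matrix above visualizes for the 32-state column model.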
[Figure: binary distillation column with inputs (reflux ratio RR and feed) and 32 tray composition states x₁–x₃₂; products: distillate and bottoms.]
Transformed full-order model:

    dx̄ᵢ/dt = f̄ᵢ(x̄, u),  i = 1, …, 32    (3)

Reduced model, retaining only the dominant transformed states:

    dx̄ᵢ/dt = f̄ᵢ(x̄, u),  i = 1, 2, 3
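One common way to build the reduced right-hand side f̄ from the full model f and the transform T is to lift the retained transformed states (setting the neglected ones to zero), evaluate the full model, and keep only the retained rates. A sketch under those assumptions, with an illustrative f and a random orthogonal T standing in for the real transform (all names hypothetical):

```python
import numpy as np

n, k = 6, 2                                        # hypothetical full / reduced sizes
rng = np.random.default_rng(1)
T, _ = np.linalg.qr(rng.standard_normal((n, n)))   # stand-in similarity transform
Tinv = T.T                                         # orthogonal, so T^-1 = T^T

def f(x, u):
    # illustrative stable nonlinear dynamics, not the distillation model
    return -x + 0.1 * np.tanh(x) + u

def f_reduced(z, u):
    # lift the k retained transformed states (neglected states set to zero),
    # evaluate the full model, transform, and keep the first k rates
    x = Tinv @ np.concatenate([z, np.zeros(n - k)])
    return (T @ f(x, u))[:k]

# one explicit Euler step of the reduced model
z = np.zeros(k)
u = 0.05 * np.ones(n)
z = z + 0.1 * f_reduced(z, u)
print(z.shape)  # (2,)
```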
First-principles model → reduced model (via balanced covariance matrices) → storage and retrieval of reduced model integrations (via ISAT)
Approximation Error

Each ISAT record stores a query point x⁰, the integration result f(x⁰), and a sensitivity matrix M, so nearby queries x can be answered with the local linear approximation

    f(x) ≈ f(x⁰) + M (x − x⁰)

A retrieval is accepted only while the estimated approximation error remains within the tolerance ε_tol; otherwise the record's region of accuracy is grown or a new record is added.

[Figure: an ISAT record stored in the table format (on the left) and the tree format (on the right).]
[Figure: when a new record is added, the binary tree is grown to include another leaf.]
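The retrieval / growth / addition logic can be sketched in a few lines. This toy version uses a scalar radius in place of ISAT's ellipsoid of accuracy and a flat list in place of the binary tree, with an illustrative function; all names are hypothetical, not the paper's implementation:

```python
import numpy as np

class Record:
    """One ISAT record: query point, stored result, sensitivity M, accuracy radius."""
    def __init__(self, x0, f0, M):
        self.x0, self.f0, self.M, self.r = x0, f0, M, 0.0

class ToyISAT:
    def __init__(self, f, jac, tol):
        self.f, self.jac, self.tol = f, jac, tol
        self.records = []                  # flat list stands in for the binary tree

    def query(self, x):
        if self.records:
            rec = min(self.records, key=lambda r: np.linalg.norm(x - r.x0))
            dx = x - rec.x0
            approx = rec.f0 + rec.M @ dx
            if np.linalg.norm(dx) <= rec.r:
                return approx, "retrieval"             # inside region of accuracy
            f_exact = self.f(x)
            if np.linalg.norm(f_exact - approx) <= self.tol:
                rec.r = np.linalg.norm(dx)             # grow the region
                return f_exact, "growth"
        else:
            f_exact = self.f(x)
        self.records.append(Record(x, f_exact, self.jac(x)))   # new leaf
        return f_exact, "addition"

table = ToyISAT(f=np.sin, jac=lambda x: np.diag(np.cos(x)), tol=1e-3)
print(table.query(np.array([0.10]))[1])   # addition (empty table)
print(table.query(np.array([0.11]))[1])   # growth (linear error well below tol)
print(table.query(np.array([0.105]))[1])  # retrieval (inside grown radius)
print(table.query(np.array([2.0]))[1])    # addition (linear error too large)
```

As the table fills, most queries become cheap retrievals, which is the source of the speed-up reported later in the talk.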
32-state ODE model of binary distillation → 5-state reduced model (via balanced covariance matrices) → storage and retrieval of integrations (via ISAT)
[Figure: distillate composition (xA) vs. time (min) relative to the set point.] Set-point tracking of distillate composition for nonlinear MPC with ISAT and 5 states, nonlinear MPC with 5 states, nonlinear MPC with 32 states, and linear MPC.
[Figure: speed-up factor vs. optimization number for 5 states/ISAT, 5 states, 32 states, and 32 states/linear; average optimization times of 0.26, 0.77, 9.3, and 22.2 sec.] The number above each curve indicates the average optimization CPU time on a 2 GHz processor.
[Figure: two CSTRs in series carrying out the reaction A → B, with feed flow q, concentrations CA1 and CA2, temperatures T1 and T2, volumes V1 and V2, cooling rate Q, and a product stream.] The manipulated variable is the cooling rate to the first CSTR.
[Figure: feedforward neural network with 7 inputs, Layer 1 (20 neurons, hyperbolic tangent sigmoid transfer function), and Layer 2 (6 neurons, linear transfer function) producing 6 outputs.] The hidden layer uses a hyperbolic tangent function and the output layer a linear function. This neural net relates 7 inputs to 6 outputs.
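A forward pass through a network of this shape is only a few lines of numpy. The weights below are random placeholders, not the trained values from the talk:

```python
import numpy as np

rng = np.random.default_rng(0)
W1, b1 = rng.standard_normal((20, 7)), np.zeros(20)   # hidden layer: 7 -> 20
W2, b2 = rng.standard_normal((6, 20)), np.zeros(6)    # output layer: 20 -> 6

def forward(x):
    h = np.tanh(W1 @ x + b1)     # hyperbolic tangent sigmoid layer
    return W2 @ h + b2           # linear output layer

y = forward(np.ones(7))
print(y.shape)  # (6,)
```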
[Figure: temperature (K) vs. time (min) comparing the actual response with the neural net and ISAT approximations; markers distinguish ISAT retrieval, growth, and addition events.] When a query cannot be answered by retrieval from or growth of an existing record, a new record must be added, thereby avoiding extrapolation error.
[Figure: Reactor #2 temperature (K) set-point tracking for 6 states/ISAT, 6 states, and 6 states/Neural Net, with the set point inside the neural net's training domain.]
[Figure: Reactor #2 temperature (K) set-point tracking for 6 states/ISAT, 6 states, and 6 states/Neural Net, with the set point outside the neural net's training domain.]