
Computing an Optimal Control Policy for an Energy Storage



  1. Computing an Optimal Control Policy for an Energy Storage
  Pierre Haessig, Thibaut Kovaltchouk, Bernard Multon, Hamid Ben Ahmed, and Stéphane Lascaud
  EDF R&D LME, ENS Cachan SATIE. Contact: pierre.haessig@ens-cachan.fr
  EuroSciPy 2013, Brussels, August 24th 2013. Companion article: http://publications.pierreh.eu

  2. Outline of the presentation
  1. Intro
  2. Example of Ocean Power Smoothing
  3. Solving Dynamic Optimization with Dynamic Programming

  3. Outline of the presentation
  1. Intro
  2. Example of Ocean Power Smoothing
  3. Solving Dynamic Optimization with Dynamic Programming

  4. My background
  Curriculum in Electrical Engineering and Control Theory → the Matlab/Simulink kingdom.
  PhD student on Electricity Storage in relation to Wind Energy.
  Python for all my simulation and visualisation work (and a bit of R for time series analysis).

  5. StoDynProg: a Dynamic Optimization problem-solving code
  Working on the management of Energy Storage with Wind Power, I've progressively discovered that:
  - my problems fall in the class of Dynamic Optimization (a quite specific problem structure),
  - the Dynamic Programming approach exists to solve them,
  - basic DP algorithms are "too simple to be worth implementing"!

  6. StoDynProg: a Dynamic Optimization problem-solving code
  Working on the management of Energy Storage with Wind Power, I've progressively discovered that:
  - my problems fall in the class of Dynamic Optimization (a quite specific problem structure),
  - the Dynamic Programming approach exists to solve them,
  - basic DP algorithms are "too simple to be worth implementing"!
  So I've started a generic code to solve all my problems, and hopefully other Dynamic Optimization problems as well. I wanted to challenge this "genericity claim" by trying it on a different problem, taken from a topic of interest of my research group: Ocean Power Smoothing (with an Energy Storage).

  7. Outline of the presentation
  1. Intro
  2. Example of Ocean Power Smoothing
  3. Solving Dynamic Optimization with Dynamic Programming

  8. Ocean Wave Energy Harvesting
  Harvesting electric power from Ocean Waves with "big machines" is an active area of Research & Development. There are no industrialized devices yet (unlike for wind & sun), but rather a wide variety of prototype machines.
  [Pictures: Wave Energy Converters (CC-BY-NC by polandeze, www.flickr.com/photos/polandeze/3151015577); E.ON P2 Pelamis, July 2011 (http://www.pelamiswave.com)]

  9. Ocean Energy Converter: the SEAREV
  Hydro-mechanical design from Centrale Nantes. My group is involved in the electric generator design.

  10. Ocean Energy Converter: the SEAREV, a highly fluctuating output
  [Figure: speed (m/s) and power (MW) versus time (s), full record and zoomed view]
  SEAREV is a giant double pendulum that swings with the waves. An electric generator "brakes" the inner wheel to generate power (P_prod = T(Ω) × Ω).

  11. Power smoothing
  [Figure: P_prod and P_grid, Power (MW) versus time (s)]
  Objective of this application: I want to smooth out the variations of the power production. This requires an energy buffer to store the difference (P_prod − P_grid).
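A rough numeric sketch of this buffering idea (not the authors' simulation code): the stored energy simply integrates the difference between the produced and the injected power. The function name, the time step dt and the capacity E_rated below are illustrative assumptions.

```python
import numpy as np

def storage_step(E_sto, P_prod, P_grid, dt=0.1, E_rated=10e6):
    """One step of the energy buffer (illustrative sketch, assumed values).

    E_sto   : stored energy at step k (J)
    P_prod  : SEAREV production at step k (W)
    P_grid  : power injected into the grid at step k (W)
    dt      : time step (s), assumed
    E_rated : storage capacity (J), assumed
    """
    E_next = E_sto + (P_prod - P_grid) * dt  # the buffer absorbs the difference
    return np.clip(E_next, 0.0, E_rated)     # storage can neither go below empty nor above full
```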

  12. Power smoothing using an Energy Storage
  [Figure: P_prod and P_grid, Power (MW) versus time (s)]
  [Diagram: Renewable Energy Source with an Energy Storage System; blocks: Ocean Wave Energy Converter, Production, Grid, Energy Storage, Management (?)]

  13. Power smoothing: control of the Energy Storage
  [Figure: top panel, P_prod, P_grid and average Power (MW) versus time (s); bottom panel, stored Energy E_sto (MJ) versus time (s)]

  14. Power smoothing: control of the Energy Storage
  [Figure: P_prod, P_grid (linear policy), average Power (MW) and stored Energy E_sto (MJ) versus time (s)]
  First, using a simple control law (~policy): quite a good result, but the storage is underused → we could do better.
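The slide does not spell out the "linear policy"; one plausible reading, given only as an assumption and not as the authors' actual law, is a proportional feedback that injects more power when the storage fills up:

```python
def linear_policy(E_sto, P_avg=0.4e6, E_rated=10e6, gain=0.2e6):
    """Hypothetical linear control law (all parameter values are assumptions).

    Injects the average power plus a correction proportional to the storage
    level: a full storage pushes P_grid above average, an empty one below.
    """
    return P_avg + gain * (E_sto / E_rated - 0.5)
```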

  15. Power smoothing: control of the Energy Storage
  [Figure: P_prod, P_grid (linear policy), average Power (MW) and stored Energy E_sto (MJ) versus time (s)]
  "Doing better" is defined with an additive cost function which penalizes the P_grid variations:

  J = (1/N) E[ Σ_{k=0}^{N−1} cost(P_grid(k) − P_avg) ]   with N → ∞

  The cost J should be minimized.
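Numerically, J can be estimated on a finite simulated trajectory as a simple time average; the quadratic penalty below is an assumption (the slide only states that the cost penalizes the P_grid variations):

```python
import numpy as np

def average_cost(P_grid, P_avg):
    """Empirical estimate of the cost J over a finite trajectory (sketch).
    A quadratic penalty cost(d) = d**2 is assumed."""
    P_grid = np.asarray(P_grid, dtype=float)
    return np.mean((P_grid - P_avg) ** 2)
```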

  16. Power smoothing: control of the Energy Storage
  [Figure: P_prod, P_grid (linear policy), average Power (MW) and stored Energy E_sto (MJ) versus time (s)]
  Controlling the storage (choosing P_grid at each time step) in order to minimize a cost function is a Stochastic Dynamic Optimization problem (also called Stochastic Optimal Control).

  17. Power smoothing: control of the Energy Storage
  [Figure: P_prod, P_grid (linear policy), average Power (MW) and stored Energy E_sto (MJ) versus time (s)]
  Dynamic Programming (Richard Bellman, ~1950) teaches us that the optimal control is a state feedback policy: P_grid(t) = µ(x(t)), with x = (E_sto, speed, accel).

  18. Power smoothing: control of the Energy Storage
  [Figure: P_prod, P_grid (linear policy), average Power (MW) and stored Energy E_sto (MJ) versus time (s)]
  Dynamic Programming (Richard Bellman, ~1950) teaches us that the optimal control is a state feedback policy: P_grid(t) = µ(x(t)), with x = (E_sto, speed, accel). And DP gives us methods to compute this policy function µ...
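Once µ has been tabulated on a state grid, applying it in closed loop is just an interpolation at every time step. The sketch below assumes a callable `policy_interp` (for instance a multilinear interpolator built over the (E_sto, speed, accel) grid of µ values); all names and the time step are illustrative assumptions, not the talk's actual code.

```python
def simulate(policy_interp, P_prod, speed, accel, E0=0.0, dt=0.1):
    """Closed-loop simulation applying a tabulated feedback policy (sketch).

    policy_interp : callable mapping a state x = (E_sto, speed, accel) to P_grid
    P_prod, speed, accel : time series of the SEAREV production and motion
    """
    E_sto = E0
    P_grid_traj = []
    for k in range(len(P_prod)):
        x = (E_sto, speed[k], accel[k])
        P_grid = float(policy_interp(x))           # state feedback: P_grid(t) = mu(x(t))
        E_sto = E_sto + (P_prod[k] - P_grid) * dt  # the storage absorbs the difference
        P_grid_traj.append(P_grid)
    return P_grid_traj
```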

  19. Power smoothing: control of the Energy Storage
  [Figure: P_prod, P_grid (linear vs optimized policy), average Power (MW) and stored Energy E_sto (MJ) versus time (s)]
  Now, applying the optimal feedback policy µ*, the standard deviation of the power injected into the grid is reduced by ~20 % compared to the heuristic policy. This improvement comes purely from a smarter use of the stored energy.

  20. Outline of the presentation
  1. Intro
  2. Example of Ocean Power Smoothing
  3. Solving Dynamic Optimization with Dynamic Programming

  21. Dynamic Programming equation
  In the end, the optimization problem turns into solving the DP equation:

  J̃(x) = min_{u ∈ U(x)} E_w[ cost(x, u, w) + J̃(f(x, u, w)) ]

  (the first term is the instant cost, the second one the cost of the future), where u is the control and w is a random perturbation, using generic notations.
  - It is a functional equation: it should be solved for all x.
  - The optimal policy µ: x ↦ u appears as the argmin.
  - The DP equation is solved on a discrete grid over the state space. With x ∈ R^n, J̃ and µ are computed as n-d numpy arrays.
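As an illustration of what "solving the DP equation on a grid" can look like, here is a toy value-iteration sweep on a one-dimensional state grid. It is not the StoDynProg implementation: the dynamics, the cost, the grids and the discount factor (added so that this simplified iteration converges, whereas the talk's average-cost formulation handles the infinite horizon differently) are all assumptions.

```python
import numpy as np

x_grid = np.linspace(0.0, 10.0, 101)   # discretized state (e.g. E_sto in MJ), assumed grid
u_grid = np.linspace(-1.0, 1.0, 21)    # discretized control (e.g. power in MW), assumed grid
gamma = 0.95                           # assumed discount factor (see lead-in)

def cost(x, u):
    return (x - 5.0) ** 2 + u ** 2     # assumed instantaneous cost

def dynamics(x, u):
    return np.clip(x + u, x_grid[0], x_grid[-1])  # assumed deterministic transition

J = np.zeros_like(x_grid)    # value function, one entry per grid point
mu = np.zeros_like(x_grid)   # policy, one entry per grid point

for sweep in range(300):                      # value-iteration sweeps
    J_new = np.empty_like(J)
    for i, x in enumerate(x_grid):            # loop over the state grid
        x_next = dynamics(x, u_grid)          # vectorized over all candidate controls
        Q = cost(x, u_grid) + gamma * np.interp(x_next, x_grid, J)
        J_new[i] = Q.min()                    # instant cost + cost of the future, minimized
        mu[i] = u_grid[Q.argmin()]            # the optimal policy appears as the argmin
    J = J_new
```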

  22. Equation solving, Multilinear interpolator
  The resolution is done purely in Python: basically a giant for loop with an argmin inside.
  - numpy for handling arrays, with a good amount of vectorization
  - itertools to iterate over the state space grid (of arbitrary dimension)
  - (introspect for some signature analysis magic)
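For a state space of arbitrary dimension, the "giant for loop" can be written with itertools.product over the grid indices, which is presumably the role itertools plays here. The grids and shapes below are assumptions, and the Bellman update itself is elided.

```python
import itertools
import numpy as np

# Assumed discretization of the state x = (E_sto, speed, accel)
grids = [np.linspace(0.0, 8.0, 41),    # E_sto (MJ)
         np.linspace(-1.0, 1.0, 31),   # speed (m/s)
         np.linspace(-2.0, 2.0, 31)]   # accel (m/s^2)

shape = tuple(len(g) for g in grids)
J = np.zeros(shape)    # value function as an n-d numpy array
mu = np.zeros(shape)   # policy as an n-d numpy array

# Iterate over every point of the n-dimensional state grid
for idx in itertools.product(*(range(n) for n in shape)):
    x = tuple(g[i] for g, i in zip(grids, idx))  # physical state at this grid point
    # ... evaluate the DP right-hand side at x (argmin over the controls),
    # then store the results: J[idx] = ..., mu[idx] = ...
```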
