SLIDE 1

Introduction Setting A non-convex control problem Overview of the solutions

A model for optimal electricity storage

Tiziano De Angelis

School of Mathematics – University of Leeds

Joint work with

  • G. Ferrari (University of Bielefeld)
  • J. Moriarty (Queen Mary University of London)

Leeds 14th June 2016

SLIDE 2

Outline

  1. Introduction
  2. Setting
  3. A non-convex control problem
  4. Overview of the solutions

SLIDE 3

Optimal control for local electricity markets

Motivation: optimisation problems which may arise in power systems and energy markets.

Toy model: imagine two agents and an electricity store.

  • One invests in power from the spot market for storage, either slowly and efficiently or quickly and less efficiently
  • The other consumes from the store

                   Investor     Consumer
  Grid connection  Yes          No
  Risk             Financial    Physical

Example: recharging an electric vehicle. Investor: vehicle charger. Consumer: driver.

SLIDE 4

A natural problem for the investor

Statement of the problem:

(P) What is the optimal way to buy electricity from the market for storage, while committing to fully meet the demand at a random future time?

The mathematical features:

  • Problem (P) is an optimal control problem with stochastic features (electricity price, time of physical demand, etc.)
  • The electricity price can go negative
  • Electricity does not have to be purchased at a rate (singular controls) and the investor has limited storage (finite fuel)
  • Random arrival of demand from the customer (random time horizon)

SLIDE 5

Basic model assumptions

The spot price (Xt)t≥0 evolves according to a stochastic dynamic Demand for 1 unit of electricity arrives at a random time τ. We take τ indep. of X with τ ∼ Exp(λ) for some λ > 0. The storage level C cannot exceed 1 unit (limited storage) and cannot be

  • decreased. It follows a purely controlled dynamics

t = c + νt,

t > 0, Cν

0 = c ∈ [0,1]

where (νt)t≥0 is the cumulative amount of electricity purchased from the grid. Notice that t → νt is increasing and left-continuous with ν0 = 0.
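As a quick illustration of these dynamics, here is a minimal Python sketch that draws τ ∼ Exp(λ) and runs the storage level C^ν_t = c + ν_t under a hypothetical constant-rate purchase policy; the rate, the time step and the policy itself are illustrative assumptions, not part of the model.

```python
import random

def simulate_inventory(c0, rate, lam, dt=0.01, seed=0):
    """Simulate the storage level C_t = c0 + nu_t up to the demand time
    tau ~ Exp(lam), under a hypothetical constant-rate purchase policy
    (rate and dt are illustrative, not part of the model)."""
    rng = random.Random(seed)
    tau = rng.expovariate(lam)          # random arrival of demand
    t, c = 0.0, c0
    while t < tau and c < 1.0:
        c = min(1.0, c + rate * dt)     # nu_t increases; C is capped at 1
        t += dt
    return c, tau

c_at_tau, tau = simulate_inventory(c0=0.2, rate=0.5, lam=1.0)
```

Since ν is increasing and C is capped at 1, the simulated level always stays in [c0, 1].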

SLIDE 6

The investor aims at minimising future expected costs:

  • Prior to time τ: cost of increasing the storage
  • At time τ: terminal cost X_τ Φ(C^ν_τ), proportional to the spot price through a function Φ of the shortfall

An example: Φ(c) = α(1 − c)^β, with α > 0 and β ≥ 2. Market discount rate r = 0 (just for simplicity).

Casting problem (P): the investor aims at finding an optimal strategy ν* for the problem

  U(x,c) = inf_ν E[ ∫_0^τ X_t dν_t + X_τ Φ(C^ν_τ) ],   (1)

where E[·] is the average over all possible scenarios for the spot price and τ.
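The cost in (1) can be sanity-checked by simulation. The sketch below assumes the Brownian-motion price model of Example 2 (X_t = x + B_t) and the example penalty Φ(c) = α(1 − c)^β; for the "do nothing" control ν ≡ 0 the cost reduces to E[X_τ Φ(c)] = xΦ(c), because τ is independent of X and E[X_τ | τ] = x.

```python
import math
import random

def Phi(c, alpha=1.0, beta=2.0):
    """Example shortfall penalty Phi(c) = alpha*(1 - c)**beta."""
    return alpha * (1.0 - c) ** beta

def cost_do_nothing(x, c, lam, n=200_000, seed=1):
    """Monte Carlo estimate of E[X_tau * Phi(c)] for the 'total inaction'
    control nu = 0, with X_t = x + B_t (the Brownian model) and
    tau ~ Exp(lam) independent of X.  Should be close to x * Phi(c)."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        tau = rng.expovariate(lam)
        x_tau = x + math.sqrt(tau) * rng.gauss(0.0, 1.0)  # X_tau ~ N(x, sd sqrt(tau))
        total += x_tau * Phi(c)
    return total / n

est = cost_do_nothing(x=1.0, c=0.5, lam=1.0)
```

With x = 1, c = 0.5 the exact value is x·Φ(0.5) = 0.25, and the estimate lands within Monte Carlo error of it.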


SLIDE 10

A SSC problem with non-convex costs

Mathematical context and challenges. Problem (1) is a singular stochastic control (SSC) problem with a 2-dimensional state space. The cost of using the control is linear in ν, while ν ↦ XΦ(c + ν) is not convex. We want to minimise a functional ν ↦ J(ν) which is neither convex nor concave!

SLIDE 11

Existing mathematical literature and novelty:

  • The mathematics of SSC problems dates back to 1966 (Bather and Chernoff) and has been widely developed since then by many authors
  • A key ingredient is often the convexity of the performance criterion w.r.t. the control variable (concavity for maximisation problems)
  • Under the convexity assumption (and some other standard conditions): (i) there exists a unique optimal control, (ii) the state space is split into an action and an inaction region, and (iii) the control is of reflecting type

What kind of solution should we expect when convexity breaks down?


SLIDE 13

It turns out that the structure of our problem depends on the choice of the price dynamics and on the sign of

  k(c) := λ + λΦ′(c),   c ∈ [0,1]

(related to the marginal cost of investment). In many practical examples there exists a unique ĉ ∈ (0,1) such that

  k(c) > 0 for c > ĉ,   k(c) < 0 for c < ĉ.
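For the example penalty Φ(c) = α(1 − c)^β the threshold is available in closed form: Φ′(c) = −αβ(1 − c)^(β−1), so k(ĉ) = 0 gives ĉ = 1 − (αβ)^(−1/(β−1)), which lies in (0,1) precisely when αβ > 1. A small sketch (the parameter values are illustrative):

```python
def k(c, alpha=1.0, beta=2.0, lam=1.0):
    """k(c) = lam*(1 + Phi'(c)) with Phi(c) = alpha*(1-c)**beta,
    so Phi'(c) = -alpha*beta*(1-c)**(beta-1)."""
    return lam * (1.0 - alpha * beta * (1.0 - c) ** (beta - 1.0))

def c_hat(alpha=1.0, beta=2.0):
    """Root of k: c_hat = 1 - (alpha*beta)**(-1/(beta-1)),
    which lies in (0,1) precisely when alpha*beta > 1."""
    return 1.0 - (alpha * beta) ** (-1.0 / (beta - 1.0))

ch = c_hat(alpha=1.0, beta=2.0)  # equals 0.5 for these illustrative parameters
```

With α = 1, β = 2 one gets ĉ = 1/2, and k is indeed positive above the threshold and negative below it.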


SLIDE 15

Two examples

Example 1. Spot prices with mean reversion, e.g. the Ornstein-Uhlenbeck model

  dX_t = θ(µ − X_t)dt + σdB_t,   X_0 = x,

with θ the force of mean reversion, µ the long-term average and σ the diffusion coefficient. Intuition: harmonic oscillator + Gaussian noise.

Example 2. Spot prices normally distributed with mean x and standard deviation √t for some given x ∈ R, i.e. the Brownian motion model X_t = x + B_t (see http://stuartreid.co.za/interactive-stochastic-processes/).

One may choose other models to fit real data; here we were interested in mathematical tractability and explicit solutions.
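Both price models are easy to simulate. The sketch below discretises the OU dynamics with an Euler-Maruyama scheme; the parameter values, horizon and step count are illustrative choices, and θ = 0, σ = 1 recovers the Brownian model of Example 2.

```python
import math
import random

def simulate_ou(x0, theta, mu, sigma, T=1.0, n=1000, seed=0):
    """Euler-Maruyama discretisation of dX = theta*(mu - X) dt + sigma dB.
    Horizon T, step count n and the parameter values are illustrative;
    theta = 0, sigma = 1 recovers the Brownian model of Example 2."""
    rng = random.Random(seed)
    dt = T / n
    x, path = x0, [x0]
    for _ in range(n):
        x += theta * (mu - x) * dt + sigma * math.sqrt(dt) * rng.gauss(0.0, 1.0)
        path.append(x)
    return path

path = simulate_ou(x0=5.0, theta=10.0, mu=1.0, sigma=0.5)
```

With a strong mean-reversion force (θ = 10 here), the path is pulled from x0 = 5 down towards the long-term average µ = 1 well within the horizon.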


SLIDE 17

What do we mean by “solution”?

  • An explicit characterisation of the optimal storage strategy, in terms of simple algebraic equations
  • An analytical expression for the value function U, which can be interpreted as the minimum price for a recharge

Remark: these expressions can be used to implement real strategies and calculate real prices once the model has been calibrated on real data.


SLIDE 21

Example 1, with ĉ < 0, i.e. k(c) > 0 for all c ∈ [0,1]. Optimal storage strategy: “pure reflection”.

SLIDE 22

Example 1, with ĉ > 1, i.e. k(c) < 0 for all c ∈ [0,1]. Optimal storage strategy: “fill in the inventory at once”.

The choice of Φ and λ determines the behaviour of k(·) and therefore has a critical influence on the optimal strategy. The general case of k(c) changing sign is analytically intractable in this example.


SLIDE 24

Example 2, with ĉ ∈ (0,1), i.e. k(c) changes sign on [0,1]. Optimal storage strategy: “fill the inventory in multiple phases”.

Remark: phase (III) is actually an extreme case of a “reflecting” strategy, so this example gives a mixture of the strategies observed in Example 1.


SLIDE 26

Few words about our methodology

From the Dynamic Programming Principle one expects U to solve, in an appropriate sense to be determined, a variational problem (Hamilton-Jacobi-Bellman equation). That is, we conjecture

  max{ −LU + λU, −x − U_c } = 0   a.e. (x,c) ∈ R × (0,1),   (2)

with U(x,1) = 0 for x ∈ R, and where, for f ∈ C²(R),

  Example 1 ⇒ (Lf)(x) = (σ²/2) f″(x) + θ(µ − x) f′(x),
  Example 2 ⇒ (Lf)(x) = ½ f″(x).
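For Example 2, the equation −LU + λU = 0 appearing in (2) is (in the x variable) the ODE −½f″ + λf = 0, solved by f(x) = e^{±√(2λ)x}. A small numerical check of this fact, using a central-difference second derivative (the step size h is an illustrative choice):

```python
import math

def hjb_residual(x, lam=1.0, h=1e-4):
    """Residual of -0.5*f'' + lam*f at x for f(x) = exp(sqrt(2*lam)*x),
    i.e. -Lf + lam*f with the Example 2 generator (Lf)(x) = 0.5*f''(x).
    The second derivative is a central difference; h is an illustrative choice."""
    f = lambda y: math.exp(math.sqrt(2.0 * lam) * y)
    f2 = (f(x + h) - 2.0 * f(x) + f(x - h)) / (h * h)
    return -0.5 * f2 + lam * f(x)

res = hjb_residual(0.3)
```

The residual is zero up to discretisation and floating-point error.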

SLIDE 27

On the set C_U := {(x,c) : U_c(x,c) > −x} the HJB equation implies −LU + λU = 0. Hence the problem is reduced to: find a function U such that

  −LU + λU = 0   on C_U,
  −LU + λU ≤ 0   outside C_U,

and U is “sufficiently regular”. The above goes under the name of free-boundary problem (FBP).

SLIDE 28

What we did: we studied fine properties of the cost functional to come up with an educated guess about the shape of C_U. This helps rewrite the FBP in a more convenient form, which makes it possible to find analytical solutions.

SLIDE 29

If we look at Example 1, the cost functional reads

  J_{x,c}(ν) := E[ ∫_0^∞ e^{−λs} λ X^x_s Φ(C^{c,ν}_s) ds + ∫_0^∞ e^{−λs} X^x_s dν_s ].

Comparing the costs of simple strategies leads to the optimal control:

  Strategy                                                   Cost
  Total inaction: ν_t = 0, t ∈ [0,∞)                         J_{x,c}(0) = xΦ(c)
  Small initial increase: ν_t = ν^δ_t := δ, t ∈ (0,∞)        J_{x,c}(ν^δ) = x(δ + Φ(c + δ))
  Large initial increase: ν_t = ν^f_t := 1 − c, t ∈ (0,∞)    J_{x,c}(ν^f) = x(1 − c)
  For c < ĉ, increase to ĉ then act optimally:
    ν_t = ν^ĉ_t := ĉ − c + “opt.”, t ∈ (0,∞)                 J_{x,c}(ν^ĉ) = x(ĉ − c) + U(x, ĉ)
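The closed-form costs in the table are straightforward to encode. The sketch below uses the example penalty Φ(c) = α(1 − c)^β with the illustrative parameters α = 1, β = 2 (so ĉ = 1/2); it reproduces the pairwise comparisons between simple strategies.

```python
def Phi(c, alpha=1.0, beta=2.0):
    """Example penalty Phi(c) = alpha*(1 - c)**beta; alpha = 1, beta = 2
    gives c_hat = 1/2."""
    return alpha * (1.0 - c) ** beta

def J_inaction(x, c):
    """Total inaction, nu = 0:  J_{x,c}(0) = x * Phi(c)."""
    return x * Phi(c)

def J_small(x, c, delta):
    """Small initial increase delta:  J = x * (delta + Phi(c + delta))."""
    return x * (delta + Phi(c + delta))

def J_full(x, c):
    """Fill the inventory at once:  J = x * (1 - c)."""
    return x * (1.0 - c)
```

For instance, with c = 0.6 > ĉ and a negative price x = −1, a small purchase beats inaction and filling up beats the small purchase, in line with the heuristics used to guess the shape of the action region.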


SLIDE 34

Consider c ∈ (ĉ,1], hence k(c) > 0. Letting δ be arbitrarily small:

  • J_{x,c}(ν^δ) < J_{x,c}(0) iff x < 0 (lower optimal boundary)
  • J_{x,c}(ν^f) < J_{x,c}(ν^δ) for x < 0 and δ < 1 − c (large jumps)

We then expect a lower repelling boundary.

Consider c ∈ [0,ĉ), hence k(c) < 0. Letting δ be arbitrarily small:

  • J_{x,c}(ν^δ) < J_{x,c}(0) iff x > 0 (upper optimal boundary)
  • J_{x,c}(ν^ĉ) < J_{x,c}(ν^δ) for x > 0 (“large” jumps)
  • J_{x,c}(ν^ĉ) < J_{x,c}(0) for x < 0 and |c − ĉ| → 0 (lower optimal boundary)

We then expect both lower and upper repelling boundaries.

SLIDE 36

Concluding remarks. We presented a tractable model for electricity storage which accounts for:

  • Limited storage
  • Unpredictable spot prices of electricity
  • Negative prices and mean reversion
  • Unpredictable time of arrival of the demand

Spot prices are observed on the market and revealed to the firm manager in real time. We construct storage strategies that react optimally to the observed prices, so that no foresight is needed on prices and demand. Optimality is understood as an average over all possible market/demand scenarios.

SLIDE 37

Thanks.

This presentation is based on:

  • De Angelis, T., Ferrari, G., Moriarty, J. (2015). A solvable two-dimensional degenerate singular stochastic control problem with non convex costs. Preprint, arXiv:1411.2428.
  • De Angelis, T., Ferrari, G., Moriarty, J. (2015). A nonconvex singular stochastic control problem and its related optimal stopping boundaries. SIAM J. Control Optim. 53(3).