Optimization problems in finance under full and partial information

Wolfgang Runggaldier, University of Padova, Italy (www.math.unipd.it/runggaldier)

Tutorial for the Special Semester on Stochastics with Emphasis on Finance, Linz, September 2008



SLIDE 2

OUTLINE

  • 1. Financial market models
      • Discrete time
      • Continuous time
        ⋆ Continuous and discontinuous trajectories
  • 2. Investment strategies: self-financing strategies
      • Discrete time
      • Continuous time
  • 3. Optimization problems
      • Standard portfolio optimization
      • Optimization in insurance
      • Hedging and benchmark tracking
SLIDE 3

  • 4. Optimization, arbitrage and martingale measures
  • 5. Dynamic Programming (for standard portfolio optimization)
      • Discrete time
      • Continuous time (HJB equations)
      • Approximations
  • 6. Martingale method
      • Preliminaries from hedging
      • Discrete time
      • Continuous time
      • Discussion of the martingale method vs DP
SLIDE 4

  • 7. Elements of portfolio optimization (and hedging) under incomplete information
      • Min-max approach
      • Adaptive approaches
        ⋆ A. A first discrete-time case
        ⋆ B. A second discrete-time case
        ⋆ C. A continuous-time case
      • Robust approaches
  • 8. Portfolio optimization for a purely discontinuous market model

SLIDE 5

Market models (discrete time, ∆ = 1)

  • A locally riskless asset (money market account):

    B_{n+1} = B_n(1 + r_n)  ⇔  (B_{n+1} − B_n)/B_n = r_n   (r_n known at n)

  • Risky assets (for the moment just one):

    S_{n+1} = S_n(1 + a_{n+1})  ⇔  (S_{n+1} − S_n)/S_n = a_{n+1}   (a_{n+1} unknown at n)

    Equivalently S_{n+1} = S_n ξ_{n+1} with ξ_{n+1} := 1 + a_{n+1}.

  • Example (binomial model):

    ξ_n = u with probability p ,  d with probability 1 − p
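The binomial dynamics S_{n+1} = S_n ξ_{n+1} can be simulated directly; a minimal sketch, where the function name and the parameter values (s0, u, d, p) are illustrative choices, not from the tutorial:

```python
import random

def simulate_binomial_prices(s0, u, d, p, n_steps, seed=0):
    """Simulate S_{n+1} = S_n * xi_{n+1}, with xi_{n+1} = u w.p. p, d w.p. 1-p."""
    rng = random.Random(seed)
    path = [s0]
    for _ in range(n_steps):
        xi = u if rng.random() < p else d
        path.append(path[-1] * xi)
    return path

# hypothetical parameters: up/down returns of +10% / -10%
path = simulate_binomial_prices(s0=100.0, u=1.1, d=0.9, p=0.6, n_steps=5)
```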

SLIDE 6

Market models (transition to continuous time)

  • Locally riskless asset:

    B_{t+∆} − B_t = B_t r_t ∆  →  dB_t = B_t r_t dt   (continuous compounding)

  • Risky asset: with a_{t+∆} = a_t ∆ + σ_t ξ_{t+∆} and ξ_{t+∆} ∼ N(0, ∆),

    S_{t+∆} = S_t (1 + a_t ∆ + σ_t ξ_{t+∆})

    Let w_t be a process s.t. ∆w_t := w_{t+∆} − w_t ∼ N(0, ∆) (Wiener process); then

    S_{t+∆} = S_t (1 + a_t ∆ + σ_t ∆w_t)  →  dS_t = S_t [a_t dt + σ_t dw_t]

SLIDE 7

Price processes with continuous trajectories

    dS_t = S_t [a_t dt + σ_t dw_t]   (∆w_t := w_{t+∆} − w_t ∼ N(0, ∆)  →  dw_t ∼ √dt)

  →  d log S_t = (a_t − σ_t²/2) dt + σ_t dw_t

  →  S_{t+∆} = S_t exp( ∫_t^{t+∆} (a_s − σ_s²/2) ds + ∫_t^{t+∆} σ_s dw_s ) =: S_t ξ_{t+∆}

  → All trajectories of S_t are continuous functions of t.
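The exponential form of the solution suggests an exact sampling scheme on a time grid; a minimal sketch assuming constant drift a and volatility σ (function name and parameter values are illustrative):

```python
import math
import random

def simulate_gbm(s0, a, sigma, T, n_steps, seed=1):
    """Sample S on a grid via the exact log-solution:
    S_{t+dt} = S_t * exp((a - sigma^2/2) dt + sigma * dw), dw ~ N(0, dt)."""
    rng = random.Random(seed)
    dt = T / n_steps
    s = s0
    path = [s]
    for _ in range(n_steps):
        dw = rng.gauss(0.0, math.sqrt(dt))
        s *= math.exp((a - 0.5 * sigma**2) * dt + sigma * dw)
        path.append(s)
    return path

path = simulate_gbm(s0=100.0, a=0.05, sigma=0.2, T=1.0, n_steps=252)
```

Because the log-increments are sampled exactly, the trajectories stay strictly positive for any step size.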

SLIDE 8

Price processes with discontinuous trajectories

  • Let τ_n (0 = τ_0 < τ_1 < · · · ) be a sequence of random times at which a certain event happens.

  • Let N_t = n for t ∈ [τ_n, τ_{n+1}), i.e.  N_t = Σ_{n≥1} 1_{τ_n ≤ t}

  → N_t is a counting process and dN_t ∈ {0, 1}.

SLIDE 9

  • Let S_t change only at the times τ_n, with return γ_n > −1 at τ_n:

    (S_{τ_n} − S_{τ_n^-}) / S_{τ_n^-} = γ_n  ⇔  S_{τ_n} = S_{τ_n^-}(1 + γ_n)  ⇔  dS_t = S_{t^-} γ_t dN_t

  ⇒  S_t = S_0 ∏_{n=1}^{N_t} (1 + γ_{τ_n}) = S_0 exp( Σ_{n=1}^{N_t} log(1 + γ_{τ_n}) ) = S_0 exp( ∫_0^t log(1 + γ_s) dN_s )

SLIDE 10

Combining the two (jump-diffusion models):

    dS_t = S_{t^-} [a_t dt + σ_t dw_t + γ_t dN_t]

implies

    S_{t+∆} = S_t exp( ∫_t^{t+∆} (a_s − σ_s²/2) ds + ∫_t^{t+∆} σ_s dw_s ) ∏_{n=N_t+1}^{N_{t+∆}} (1 + γ_n)

            = S_t exp( ∫_t^{t+∆} (a_s − σ_s²/2) ds + ∫_t^{t+∆} σ_s dw_s + ∫_t^{t+∆} log(1 + γ_s) dN_s )
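A simulation sketch of the combined dynamics, assuming a constant jump return γ > −1 and a Poisson counting process with constant intensity λ (all names and numbers are illustrative; the diffusion part uses the exact log-scheme, the jump part a Bernoulli approximation of dN over each small step):

```python
import math
import random

def simulate_jump_diffusion(s0, a, sigma, lam, gamma, T, n_steps, seed=2):
    """Simulate the terminal value of dS = S_{t-}[a dt + sigma dw + gamma dN],
    with N a Poisson process of intensity lam and constant jump return gamma."""
    rng = random.Random(seed)
    dt = T / n_steps
    s = s0
    for _ in range(n_steps):
        dw = rng.gauss(0.0, math.sqrt(dt))
        dN = 1 if rng.random() < lam * dt else 0   # P{jump in dt} ~ lam*dt
        s *= math.exp((a - 0.5 * sigma**2) * dt + sigma * dw) * (1.0 + gamma) ** dN
    return s

s_T = simulate_jump_diffusion(s0=100.0, a=0.05, sigma=0.2,
                              lam=2.0, gamma=-0.1, T=1.0, n_steps=1000)
```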

SLIDE 11

More risky assets

  • A certain number K of risky assets:

    dS^i_t = S^i_t a^i_t dt + S^i_t Σ_{j=1}^M σ^{i,j}_t dw^j_t ,   i = 1, · · · , K

    (dS_t = (diag S_t) A_t dt + (diag S_t) Σ_t dw_t)

    with w_t = [w^1_t, · · · , w^M_t]′ an M-dimensional Wiener process on a filtered probability space (Ω, F, F_t, P), (F_t = F^w_t).

  → In the classical Black-Scholes model a^i_t and σ^{i,j}_t are deterministic ⇒ S_t is a lognormal process.

SLIDE 12

  • The coefficients may also be stochastic processes that are either:

    i) adapted to F^w_t, or

    ii) Markov modulated (regime-switching models):

    dS_t = (diag S_t) A_t(Z_t) dt + (diag S_t) Σ_t(Z_t) dw_t

  → Z_t is an exogenous multivariate Markov factor process (volume of trade, level of interest rates or, generically, the "state of the economy").
  → Z_t may be directly observable or not.

SLIDE 13

  • Price trajectories may exhibit jumping behaviour; then

    dS_t = (diag S_t) A_t dt + (diag S_t) Σ_t dw_t + (diag S_{t^-}) Ψ_{t^-} dN_t

    with N_t = (N^1_t, · · · , N^H_t)′ a counting process and Ψ^{i,j}_t > −1 (jump-diffusion models).

  → More general driving processes are possible (Lévy, fractional BM).

  • On small time scales prices do not follow continuous trajectories, but rather piecewise constant ones with jumps at random points in time.
  → These may be modeled by continuous trajectories sampled at the jumps of a Poisson process.

SLIDE 14

Investment strategies (discrete time)

  • Given B_n, S^i_n (i = 1, · · · , K), let φ^0_n, φ^i_n be the number of riskless resp. risky assets held in the portfolio in period n.

  → φ_n = [φ^0_n, φ^1_n, · · · , φ^K_n] is predictable (determined on the basis of the information F_{n−1}).

  • The value of the corresponding portfolio is then

    V_n = φ^0_n B_n + Σ_{i=1}^K φ^i_n S^i_n

SLIDE 15

Self-financing property (with consumption c_n in period n):

    φ^0_n B_n + Σ_{i=1}^K φ^i_n S^i_n = φ^0_{n+1} B_n + Σ_{i=1}^K φ^i_{n+1} S^i_n + c_n

  →  V_{n+1} = φ^0_{n+1} B_{n+1} + Σ_{i=1}^K φ^i_{n+1} S^i_{n+1}

             = [ φ^0_{n+1} B_n + Σ_{i=1}^K φ^i_{n+1} S^i_n ] + φ^0_{n+1}(B_{n+1} − B_n) + Σ_{i=1}^K φ^i_{n+1}(S^i_{n+1} − S^i_n)

  →  ∆V_n = φ^0_{n+1} ∆B_n + Σ_{i=1}^K φ^i_{n+1} ∆S^i_n − c_n

SLIDE 16

With proportional transaction costs (rate γ^i):

    ∆V_n = φ^0_{n+1} ∆B_n + Σ_{i=1}^K φ^i_{n+1} ∆S^i_n − Σ_{i=1}^K γ^i S^i_n | φ^i_{n+1} − φ^i_n |

SLIDE 17

Alternative representation of the investment strategy:

    π^i_n = φ^i_{n+1} S^i_n / V_n ,   i = 1, · · · , K   (with S^0_n = B_n)

determined on the basis of the information F_n. Notice that

    Σ_{i=0}^K φ^i_{n+1} S^i_n = V_n − c_n

  →  Σ_{i=0}^K π^i_n = 1 − c_n/V_n   (= 1 for c_n = 0)

    π^0_n = 1 − Σ_{i=1}^K π^i_n − c_n/V_n

SLIDE 18

Taking ∆ = 1 and using S^i_{n+1} = S^i_n ξ^i_{n+1} :

    V_{n+1} = V_n + φ^0_{n+1} ∆B_n + Σ_{i=1}^K φ^i_{n+1} ∆S^i_n − c_n

            = V_n + φ^0_{n+1} B_n r_n + Σ_{i=1}^K φ^i_{n+1} S^i_n (ξ^i_{n+1} − 1) − c_n

            = V_n [ (1 + r_n) + Σ_{i=1}^K π^i_n ( ξ^i_{n+1} − (1 + r_n) ) ] − c_n(1 + r_n)

            := G_n(V_n, π_n, c_n, ξ_{n+1})

  → Autonomous evolution of V_n, driven directly by ξ_n and without reference to S^i_n.
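The one-step map G_n above can be written out directly; a minimal sketch with the function name and toy numbers as illustrative choices:

```python
def wealth_step(v, r, pi, xi, c=0.0):
    """One step of V_{n+1} = G_n(V_n, pi_n, c_n, xi_{n+1}):
    V_{n+1} = V_n [(1+r) + sum_i pi_i (xi_i - (1+r))] - c (1+r)."""
    growth = (1.0 + r) + sum(p * (x - (1.0 + r)) for p, x in zip(pi, xi))
    return v * growth - c * (1.0 + r)

# two risky assets: 20% in the first, 10% in the second, rest in the bank
v1 = wealth_step(100.0, 0.05, [0.2, 0.1], [1.10, 0.95])
```

With pi = 0 the recursion reduces to pure compounding at rate r, as expected from the formula.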

SLIDE 19

Investment strategies/controls (continuous time)

Agents invest in the market according to an investment strategy, e.g.

    φ̄_t = (φ^0_t, φ^1_t, · · · , φ^K_t) := (φ^0_t, φ_t)

with φ^i_t the number of shares of asset i held in the portfolio at time t (i = 0 : riskless asset). φ̄_t is taken to be predictable w.r. to F_t.

    V_t = φ^0_t B_t + Σ_{i=1}^K φ^i_t S^i_t

is the corresponding value process.

SLIDE 20

Equivalently,

    π_t = (π^1_t, · · · , π^K_t)   with   π^i_t = φ^i_t S^i_t / V_t   (i = 1, · · · , K)

the fraction of wealth invested in the risky asset i at time t (1 − Σ_{i=1}^K π^i_t : fraction invested in the riskless asset).

SLIDE 21

Self-financing strategies/portfolios

Denoting by c_t the consumption rate at time t, require

    dV_t = φ^0_t dB_t + Σ_{i=1}^K φ^i_t dS^i_t − c_t dt

  ⇔  dV_t/V_t = π^0_t dB_t/B_t + Σ_{i=1}^K π^i_t dS^i_t/S^i_t − (c_t/V_t) dt

or, equivalently, considering price models with continuous trajectories,

    dV_t = V_t [ r_t dt + π_t(A_t − r_t 1) dt + π_t Σ_t dw_t ] − c_t dt

  → With π_t instead of φ_t : autonomous dynamics of V_t without reference to B_t or S_t.

SLIDE 22

  • Agents choose for their investments a subset of the available assets (with prices S^i_t, i = 1, · · · , K).

  → In addition to actual portfolios one may then consider also theoretical self-financing portfolios that include most of the assets in the market.
  → Such theoretical portfolios may serve as an index or benchmark (e.g. S&P 500), with the goal of the investor being to track or beat a given benchmark.

SLIDE 23

  • With the index portfolio strategy expressed as a fraction of wealth:

    dI_t = I_t [α_t dt + σ_t dw_t + ρ_t dv_t]

    with w_t the (multivariate) Wiener process driving the assets chosen for actual investment, and v_t a Wiener process, independent of w_t, resulting from the disturbances that drive exclusively the additional assets on the market.

  → A benchmark may also represent other economic quantities, such as a wage process in an insurance context.

SLIDE 24

Optimization/control problems

  • Consider small investors, i.e. their investment decisions do not affect the prices on the market.

  • Two groups of "state variables":

    i) asset prices (and benchmark) are uncontrolled;
    ii) the portfolio value process is the only controlled state variable (autonomous dynamics under self-financing).

  • The objective function generally depends on both types of state variables.

SLIDE 25

Standard classical optimization problem (maximization of expected utility from consumption and terminal wealth)

Neglecting transaction costs, but considering as additional control variable c_t, the rate of consumption at time t :

    dV^{π,c}_t = V^{π,c}_t [ r_t dt + π′_t(A_t − r_t 1) dt + π′_t Σ_t dw_t ] − c_t dt

    max_{π,c} E_{V_0} { ∫_0^T U_1(t, c_t) dt + U_2(V^{π,c}_T) }

  • with U_1(·) and U_2(·) utility functions from consumption and terminal wealth respectively that satisfy the usual assumptions.

SLIDE 26

Insurance context

  • The fundamental quantity, corresponding to V_t, is here the risk process which, without investment or reinsurance, is given by

    R_t = s + ct − Σ_{i=1}^{N_t} X_i := s + ct − D_t

    with X_i the claim sizes and c the premium intensity.

  → Additional features: ⋆ Investment ⋆ Reinsurance ⋆ Other
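The risk process lends itself to a simple Monte Carlo sketch of a finite-horizon ruin probability, assuming Poisson claim arrivals and exponentially distributed claim sizes (a Cramér-Lundberg setup); all names and parameter values below are illustrative:

```python
import random

def ruin_probability(s, c, lam, claim_mean, T, n_paths=2000, seed=0):
    """Monte Carlo estimate of P{R_t < 0 for some t <= T | R_0 = s} for
    R_t = s + c t - D_t, D_t a compound Poisson sum of exponential claims.
    Since premiums accrue continuously, ruin can only occur at claim times."""
    rng = random.Random(seed)
    ruined = 0
    for _ in range(n_paths):
        t, claims = 0.0, 0.0
        while True:
            t += rng.expovariate(lam)              # next claim arrival
            if t > T:
                break
            claims += rng.expovariate(1.0 / claim_mean)
            if s + c * t - claims < 0.0:
                ruined += 1
                break
    return ruined / n_paths

# positive safety loading: c = 1.2 > lam * claim_mean = 1.0
psi = ruin_probability(s=10.0, c=1.2, lam=1.0, claim_mean=1.0, T=50.0)
```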

SLIDE 27

Investment

  • One can invest in one or more risky assets. Assume one such asset with price dynamics

    dZ_t = a Z_t dt + b Z_t dw_t

    and let A_t be the amount invested in this asset (the rest in the bank account with interest rate r).

  • With θ_t = A_t / Z_t denoting the number of shares held in the risky asset, one obtains for the risk process

    dR^θ_t = c dt − dD_t + θ_t dZ_t + r(R^θ_t − θ_t Z_t) dt

SLIDE 28

Reinsurance

  • There are various forms of reinsurance; here we mention excess-of-loss reinsurance: for a retention level b, the insurer pays min(X, b) and the reinsurer pays (X − b)^+. For this the insurer pays the reinsurance premium h(b) to the reinsurer.

  • One then obtains for the risk process

    R^b_t = s + ct − ∫_0^t h(b_s) ds − Σ_{i=1}^{N_t} min{ b_{T_i}, X_i }

SLIDE 29

Objective

  • A typical objective is the minimization of the ruin probability

    P{ R_t < 0 for some t | R_0 = s }.

  • One may consider other objective functions, where π_t denotes a generic control at time t (it may be θ_t or b_t), e.g.

    max_π E { ∫_0^τ U(R^π_t, π_t) dt + U(R^π_τ, π_τ) | R_0 = s }

    with τ := inf{ t ≥ 0 | R^π_t < 0 }.

  → This is of the previous standard form (with a random horizon).

SLIDE 30

Hedging problem

  • Given an (underlying) price process S_t and a future maturity T, let H_T ∈ F^S_T (contingent claim).

  → It represents a liability depending on the future evolution of the underlying S. This implies some risk, and the purpose is to hedge this risk by investing in a self-financing portfolio.

SLIDE 31

  • Let

    V^φ_t = V_0 + ∫_0^t φ^0_s dB_s + Σ_{i=1}^N ∫_0^t φ^i_s dS^i_s

    be the value at t of a self-financing portfolio.

  → Determine, if possible, V_0 and φ̄_t s.t. V^φ̄_T = H_T a.s. (equivalently V^π_T = H_T a.s.); i.e. such that one has perfect duplication/replication of the claim.
  → If this is possible for any H_T, then the market is said to be complete.

SLIDE 32

  → If the market is not complete, or the initially available capital is not sufficient for perfect replication, one has to choose a hedging criterion. Two possible criteria are:

  • Minimization of shortfall risk (an asymmetric, downside-type criterion):

    E_{S_0,V_0} { L( (H_T − V^π_T)^+ ) } → min

  • Minimization of quadratic loss (a symmetric risk criterion):

    E_{S_0,V_0} { (H_T − V^π_T)^2 } → min
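For a fixed strategy, both criteria can be estimated from simulated scenarios; a minimal sketch with the loss function L taken as the identity in the shortfall case (function name and sample pairs are illustrative):

```python
def hedging_criteria(samples):
    """Evaluate both criteria for a fixed strategy from simulated
    scenarios, given as (H_T, V_T) pairs:
      shortfall risk  E[(H_T - V_T)^+]   (asymmetric: downside only)
      quadratic loss  E[(H_T - V_T)^2]   (symmetric: penalizes overshoot too)."""
    n = len(samples)
    shortfall = sum(max(h - v, 0.0) for h, v in samples) / n
    quadratic = sum((h - v) ** 2 for h, v in samples) / n
    return shortfall, quadratic

# the overshooting scenario costs nothing under the shortfall criterion,
# but contributes fully to the quadratic one
sf, ql = hedging_criteria([(10.0, 8.0), (10.0, 12.0)])
```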

SLIDE 33

General problem formulation (over a finite horizon and including a benchmark):

    dS_t = (diag S_t) A_t dt + (diag S_t) Σ_t dw_t

    dI_t = I_t [α_t dt + σ_t dw_t + ρ_t dv_t]

    dV^{π,c}_t = V^{π,c}_t [ r_t dt + π_t(A_t − r_t 1) dt + π_t Σ_t dw_t ] − c_t dt

    min_{π,c} E_{S_0,I_0,V_0} { ∫_{T−δ}^T L_1(I_t, V^{π,c}_t, π_t, c_t) dt + L_2(S_T, I_T, V^π_T) }

SLIDE 34

  • L_1(·) and L_2(·) are loss functions that may be of the following form:

    L_1(I_t, V^{π,c}_t, π_t, c_t) = (g(I_t) − η − V^π_t)^+ − c_t

    L_2(S_T, I_T, V^π_T) = (H(I_T, S_T) − V^π_T)^+

    for some functions g(·) and H(·) and for an η > 0.

  → This includes hedging of a contingent claim H(S_T).

SLIDE 35

One may consider different variants of the basic setup, corresponding to possible variants of a general stochastic control problem, e.g.:

  • The basic dynamics of the price vector S_t may be generalized as described previously.

  • If some of the assets are subject to default, the fixed horizon may be replaced by a stopping time τ. This stopping time may also become a control variable for the hedging problem of American-type options.

  • The horizon may become infinite for problems of life-time consumption or when the objective is to maximize the growth rate. Maximizing the risk-sensitized growth rate leads to a risk-sensitive control problem.

SLIDE 36

  • In the presence of transaction costs, a convenient way to define a trading strategy is by the total number of shares of the various assets that are purchased or sold up to the current time t, i.e. L^i_t and M^i_t respectively. Letting λ^i and µ^i denote the cost rates for buying respectively selling asset i, the self-financing condition then leads to

    dV_t = φ^0_t dB_t + Σ_{i=1}^N [ φ^i_t dS^i_t − S^i_t ( λ^i dL^i_t + µ^i dM^i_t ) ]

    with φ^i_t = L^i_t − M^i_t. In this way one obtains a singular stochastic control problem.

  • The inclusion also of fixed transaction costs may lead to impulsive control problems. This kind of problem may also arise when a central bank intervenes to control the exchange rate.

SLIDE 37

Optimization, arbitrage, and martingale measures

  • Arbitrage opportunity (AO): existence of a self-financing portfolio φ s.t.

    V^φ_0 = 0 ,   V^φ_N ≥ 0 ,   P{ V^φ_N > 0 } > 0

  • Consider for simplicity maximization of terminal utility:

    max_{φ self-financing} E_{V_0=v} { U(V^φ_N) }

  → If there exists an optimal solution φ* of this problem, then there cannot be an (AO).

SLIDE 38

Proof:

  • Given φ*, let φ be an arbitrage portfolio and put φ̄ = φ* + φ, so that

    V^φ̄_N = V^{φ*}_N + V^φ_N   (V^φ̄_0 = v)

  • From the assumption on φ: V^φ_N ≥ 0 , P{ V^φ_N > 0 } > 0

    ⇒  V^φ̄_N ≥ V^{φ*}_N   with   P{ V^φ̄_N > V^{φ*}_N } > 0

  • Since U(·) is monotonically increasing,

    ⇒  E{U(V^φ̄_N)} > E{U(V^{φ*}_N)}

    contradicting the assumed optimality of φ*.

SLIDE 39

  • According to the 1st FTAP,

    AOA  "→"  ∃ MM Q

    i.e. there exists a numeraire N_n (reference asset/portfolio) s.t.

    E^Q { S_n/N_n | F_m } = S_m/N_m ,   m < n

  → For an at most denumerable Ω, if φ* is a solution of max_φ E{U(V^φ_N)}, then, for the numeraire N_n = B_n,

    Q(ω) = P(ω) · B_N U′(V^{φ*}_N) / E{ B_N U′(V^{φ*}_N) }
slide-40
SLIDE 40

Changing numeraire − → change MM Question : is there a numeraire s.t. for the corresponding Q it holds Q = P ? → A portfolio that, if used as numeraire, has the above property is called numeraire portfolio

SLIDE 41

  • Log-optimal portfolio: φ* s.t.

    max_φ E_{V_0=v}{ log V^φ_N } = E_{V_0=v}{ log V^{φ*}_N }

  → A log-optimal portfolio is also growth-optimal in the sense that it maximizes the "growth rate".

Theorem: A log- (growth-) optimal portfolio is a numeraire portfolio.

SLIDE 42

Proof (for Ω denumerable):

  • It will be shown later that the log-optimal portfolio value is

    V*_n = V^{φ*}_n = v B_n L_n^{-1} ;   L_n = E{ dQ/dP | F_n } ,   Q the MM for numeraire B

  • Let Q* be the MM for the numeraire V*_N; then

    L*_N = dQ*/dQ = (V*_N B_0) / (V*_0 B_N)   (B_0 = 1 , V_0 = v)

  →  Q* = Q · V*_N/(v B_N)

        = P · [ B_N U′(V*_N) / E{ B_N U′(V*_N) } ] · V*_N/(v B_N)

        = P · [ B_N (V*_N)^{-1} V*_N ] / [ E{ B_N v^{-1} B_N^{-1} L_N } · v B_N ]

        = P

    (using U′(v) = v^{-1} for U = log, so that B_N U′(V*_N) = v^{-1} L_N).

SLIDE 43

Solution methodologies

  • As for general stochastic control problems, also for those arising in finance a natural solution approach is based on Dynamic Programming (DP).

  • An alternative method, developed mainly in connection with financial applications, is the so-called martingale method (MM).

SLIDE 44

Dynamic Programming (discrete time)

  • Recalling V_{n+1} = G_n(V_n, π_n, c_n, ξ_{n+1}) with ξ_n i.i.d.:

  → if π_n = π(V_n), c_n = c(V_n) are Markov controls, then V_n is Markov.

  • Objective:

    max_{(π_0,c_0),··· ,(π_N,c_N)} E_{V_0} { Σ_{n=0}^N U(V_n, π_n, c_n) }

SLIDE 45

Dynamic Programming Principle (DP)

  • If a process is optimal over an entire sequence of periods, then it has to be optimal over each single period.

  • This allows one to obtain an optimal control sequence by a sequence of optimizations over the individual controls.

SLIDE 46

Application of the DP principle

Using adaptedness of (π_n, c_n) and Markovianity of V_n (for illustration, the case N = 2 with only the π_n as controls):

    max_{π_0,π_1,π_2} E{ U(V_0, π_0) + U(V_1, π_1) + U(V_2, π_2) }                                  (DP)

  = max_{π_0,π_1} E{ U(V_0, π_0) + U(V_1, π_1) + max_{π_2} U(V_2, π_2) }                            (Markov)

  = max_{π_0,π_1} E{ U(V_0,π_0) + [ U(V_1,π_1) + E{ max_{π_2} U(V_2,π_2) | (V_1,π_1) } ] }          (DP + Markov)

  = max_{π_0} E{ U(V_0,π_0) + E{ max_{π_1} [ U(V_1,π_1) + E{ max_{π_2} U(V_2,π_2) | (V_1,π_1) } ] | (V_0,π_0) } }

SLIDE 47

Implementation of the DP principle

  • Let (optimal cost-to-go)

    Φ_n(v) := max_{π_n,··· ,π_N} E { Σ_{m=n}^N U(V_m, π_m) | V_n = v }

  → The DP principle then leads to the backwards recursions (DP algorithm):

    Φ_N(v) = max_{π_N} U(v, π_N)

    Φ_n(v) = max_{π_n} [ U(v, π_n) + E { Φ_{n+1}( G_n(v, π_n, ξ_{n+1}) ) | V_n = v } ]

  → This leads to a sequence of individual maximizations; one automatically obtains Markov controls (π_n as a function only of V_n = v).
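When ξ is finite-valued, the backwards recursion can be run numerically by grid search; a minimal sketch for terminal utility only, with Φ_{n+1} evaluated by linear interpolation on a wealth grid and the fraction π restricted to a hypothetical grid in [0, 1] (all names and parameter choices are ours, not the tutorial's):

```python
import math
from bisect import bisect_left

def dp_algorithm(N, v_grid, pi_grid, xi_vals, xi_probs, U, r=0.0):
    """Backward DP recursion (terminal utility only):
       Phi_N(v) = U(v),
       Phi_n(v) = max_pi E{ Phi_{n+1}( v[(1+r) + pi(xi - (1+r))] ) },
    with Phi_{n+1} evaluated by linear interpolation on v_grid.
    Returns (Phi_0 on the grid, optimal pi at n = 0 on the grid)."""
    def interp(phi, x):
        if x <= v_grid[0]:
            return phi[0]
        if x >= v_grid[-1]:
            return phi[-1]
        i = bisect_left(v_grid, x)
        w = (x - v_grid[i - 1]) / (v_grid[i] - v_grid[i - 1])
        return (1.0 - w) * phi[i - 1] + w * phi[i]

    def expected(phi, v, pi):
        return sum(p * interp(phi, v * ((1.0 + r) + pi * (x - (1.0 + r))))
                   for p, x in zip(xi_probs, xi_vals))

    phi = [U(v) for v in v_grid]                       # Phi_N
    policy = [0.0] * len(v_grid)
    for _ in range(N):                                 # n = N-1, ..., 0
        phi, policy = (
            [max(expected(phi, v, pi) for pi in pi_grid) for v in v_grid],
            [max(pi_grid, key=lambda pi, v=v: expected(phi, v, pi)) for v in v_grid],
        )
    return phi, policy
```

Note that discretizing the wealth state introduces an interpolation error; this is exactly the kind of approximation discussed later via Kushner's methodology.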

SLIDE 48

  • The DP algorithm can be used for numerical calculations if ξ_n is finite-valued.

  • It can however also be used to obtain some explicit expressions, as will be illustrated for the following example (scalar S and, for simplicity, only terminal utility and c_n = r_n = 0):

    G_n(V_n, π_n, ξ_{n+1}) = V_n [ 1 + π_n(ξ_{n+1} − 1) ]

    max_{π_0,··· ,π_N} E{ U(V_N) }

SLIDE 49

  → If U(v) = log v (log-utility) and ξ_n is binomial (ξ_n ∈ {u, d}), then

    Φ_n(v) = log v + k_n   with   k_n = (N − n) [ p log(p/q) + (1 − p) log((1−p)/(1−q)) ] ,

    q = (1 − d)/(u − d)   (d < 1 < u)

    and

    π*_n = (p − q) / ( (u − d) q (1 − q) )   (investing a constant fraction of wealth)
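The closed-form fraction can be checked against the first-order condition of the one-period problem; with the hypothetical parameters p = 0.6, u = 1.1, d = 0.9 one gets q = 0.5 and π* = 2:

```python
def log_optimal_fraction(p, u, d):
    """pi* = (p - q) / ((u - d) q (1 - q)) with q = (1 - d)/(u - d)."""
    q = (1.0 - d) / (u - d)
    return (p - q) / ((u - d) * q * (1.0 - q))

def foc(pi, p, u, d):
    """d/dpi E{log(1 + pi(xi - 1))}; vanishes at the optimum."""
    return (p * (u - 1.0) / (1.0 + pi * (u - 1.0))
            + (1.0 - p) * (d - 1.0) / (1.0 + pi * (d - 1.0)))

pi_star = log_optimal_fraction(0.6, 1.1, 0.9)
```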

SLIDE 50

Proof (by induction):

  • True for n = N.
  • Assume it is true for n + 1; then

    Φ_n(v) = max_π E{ log v + log(1 + π(ξ_{n+1} − 1)) + k_{n+1} }

    where

    E{ log(1 + π(ξ_{n+1} − 1)) } = p log(1 + π(u − 1)) + (1 − p) log(1 + π(d − 1)).

    Imposing

    (∂/∂π) E{ log(1 + π(ξ_{n+1} − 1)) } = p(u − 1)/(1 + π(u − 1)) + (1 − p)(d − 1)/(1 + π(d − 1)) = 0

    leads to the required π*; replacing the latter in the previous expression then allows one to conclude.

SLIDE 51

  • A constant investment fraction results only for log-utility.
  • Taking U(v) = 1 − e^{−v} one has, in the binomial case,

    Φ_n(v) = 1 − k_n e^{−v} ,   π*_n = [ 1 / (v(u − d)) ] log( p(1 − q) / (q(1 − p)) )

SLIDE 52

Dynamic Programming (continuous time)

Heuristic derivation of the HJB equation (from discrete to continuous time):

    dS_t = S_t [a dt + σ dw_t] ;   B_t ≡ 1

    dV_t = φ_t dS_t − c_t dt

  → with Z_t := log S_t and π_t := φ_t S_t / V_t :

    dZ_t = (a − σ²/2) dt + σ dw_t

    dV_t = (V_t a π_t − c_t) dt + V_t σ π_t dw_t

SLIDE 53

  → Putting Y_t := [Z_t, V_t]′ and Π_t := [π_t, c_t], one obtains a control problem of the following general form:

    dY_t = A_t(Y_t, Π_t) dt + B_t(Y_t, Π_t) dw_t ,   t ∈ [0, T]

    sup_Π E_{Y_0} { ∫_0^T U_1(t, Y_t, Π_t) dt + U_2(Y_T, Π_T) }

    where Π_t ∈ F^Y_t.

SLIDE 54

  • Apply an Euler-type discretization (step ∆):

    Y_{t+∆} = Y_t + A_t(Y_t, Π_t) ∆ + B_t(Y_t, Π_t) ∆w_t := G_t(Y_t, Π_t, ∆w_t)

    sup_Π E_{Y_0} { ∆ Σ_{t=0}^{T/∆} U_t(Y_t, Π_t) }

    and recall that

    Φ_t(y) = sup_Π [ ∆ U_t(y, Π) + E { Φ_{t+∆}( G_t(y, Π, ∆w_t) ) | Y_t = y } ]

SLIDE 55

Via a Taylor expansion, and taking into account that E(∆w_t)² ≈ ∆,

    sup_Π [ ∆ U_t(y, Π) + E { Φ_{t+∆}( G_t(y, Π, ∆w_t) ) − Φ_t(y) | Y_t = y } ]

    = (∂/∂t)Φ_t(y) ∆ + sup_Π [ ∆ U_t(y, Π) + ∆ A_t(y, Π) (∂/∂y)Φ_t(y) + (∆/2) B_t²(y, Π) (∂²/∂y²)Φ_t(y) + o(∆) ] = 0
SLIDE 56

  → Dividing by ∆ and letting ∆ ↓ 0 :

    (∂/∂t)Φ_t(y) + sup_Π [ A_t(y, Π) (∂/∂y)Φ_t(y) + (1/2) B_t²(y, Π) (∂²/∂y²)Φ_t(y) + U_t(y, Π) ] = 0

    Φ_T(y) = sup_Π U_T(y, Π)

SLIDE 57

Standard heuristic derivation (based on the DP principle):

    Φ_t(y) = sup_{Π_s, s∈[t,t+∆]} E { ∫_t^{t+∆} U_s(Y^Π_s, Π_s) ds + Φ_{t+∆}(Y^Π_{t+∆}) | Y_t = y }

    and then proceed analogously as before.
SLIDE 58

Solution procedure

i) Solve the maximization over Π, depending on the as yet unknown Φ_t(y).
ii) Insert the maximizing value Π*(t, y) and solve the resulting PDE.

→ A "verification theorem" guarantees, under sufficient regularity (classical solution), the optimality of the resulting Π*(t, y) and Φ_t(y).
→ In the absence of sufficient regularity: viscosity solutions.
→ Explicit analytical solutions only in particular cases (e.g. linear-quadratic Gaussian).

SLIDE 59

Standard classical optimization problem (maximization of expected utility from consumption and terminal wealth)

Neglecting transaction costs, but considering as additional control variable c_t, the rate of consumption at time t :

    dV^{π,c}_t = V^{π,c}_t [ r_t dt + π′_t(A_t − r_t 1) dt + π′_t Σ_t dw_t ] − c_t dt

    max_{π,c} E_{V_0} { ∫_0^T U_1(t, c_t) dt + U_2(V^{π,c}_T) }

  • with U_1(·) and U_2(·) utility functions from consumption and terminal wealth respectively that satisfy the usual assumptions.

SLIDE 60

  • Put, for t ∈ [0, T],

    J^{π,c}(t, v) := E^{π,c} { ∫_t^T U_1(s, c_s) ds + U_2(V^{π,c}_T) | V_t = v }

  • and let (value function)

    Φ(t, v) := sup_{π,c} J^{π,c}(t, v)

SLIDE 61

HJB equation:

    (∂Φ/∂t)(t, v) + sup_{π,c} { [ v r_t − c + v π′(A_t − r_t 1) ] (∂Φ/∂v)(t, v) + (1/2) v² ||π′Σ_t||² (∂²Φ/∂v²)(t, v) + U_1(t, c) } = 0

    Φ(T, v) = U_2(v) ,   Φ(t, 0) = 0

  →  c*_t = I_1( (∂Φ/∂v)(t, v), t )   (I_1(·) the inverse of U′_1(·))

     π*_t = − [Σ_t Σ′_t]^{-1} [A_t − r_t 1] · (∂Φ/∂v)(t, v) · [ v (∂²Φ/∂v²)(t, v) ]^{-1}

SLIDE 62

  • After substituting (π*, c*) for (π, c), one is left with a PDE: explicit solutions can be obtained only in specific cases (mainly in insurance applications); regularity results are also required.

  • Qualitative results are possible, e.g. the "mutual fund theorem": the optimal portfolio consists of an allocation between two fixed mutual funds.

  • The invertibility of Σ_t Σ′_t is equivalent to completeness of the market (recall that, if an optimal solution exists, there cannot be arbitrage, but the market may be incomplete).

SLIDE 63

Approximations

  • If analytical solutions are not possible: approximations (here an outline of a methodology based on work by H. Kushner).
  • Use the HJB equation only as an indication for finding an appropriate time and space discretization (V^δ_t) of (V_t) such that

    (V^δ_t) ⇒ V_t in distribution as δ → 0,

    with (V^δ_t) a continuous-time interpolation of a discrete-time and finite-valued process.

SLIDE 64

  • Letting J^{π,c}_δ(t, v) be the corresponding expected remaining cumulative utility at time t, assume furthermore that

    | J^{π,c}_δ(0, v) − J^{π,c}(0, v) | ≤ G_δ

    with G_δ not depending on (π, c) and such that lim_{δ→0} G_δ = 0.

SLIDE 65

Then

i) | sup_{π,c} J^{π,c}_δ(0, v) − sup_{π,c} J^{π,c}(0, v) | ≤ G_δ

ii) Let (π^δ, c^δ) be the optimal strategy of the approximating problem, and let it denote also its interpolation, in order to apply it to the original problem. Then

    | sup_{π,c} J^{π,c}(0, v) − J^{π^δ,c^δ}(0, v) | ≤ 2 G_δ

SLIDE 66

General underlying approximation philosophy

Approximate the original problem by a sequence of problems such that the last one is explicitly solvable, and show that the corresponding solution (suitably extended to be applicable in the original problem) is nearly optimal in the original problem.

→ The control computed from the approximating problem may even be simpler to apply in practice.

SLIDE 67

Martingale method (discrete time): Preliminaries

  • Q is a martingale measure if, for a given numeraire N_n,

    E^Q { S_n/N_n | F_m } = S_m/N_m ,   m < n

  • With S̃_n := N_n^{-1} S_n :

    E^Q{ S̃_n | F_m } = S̃_m  ⇔  E^Q{ S̃_n − S̃_m | F_m } = 0

  → Usually N_n = B_n (locally riskless asset).

SLIDE 68

  • The self-financing condition is

    φ^0_n B_n + Σ_{i=1}^K φ^i_n S^i_n = φ^0_{n+1} B_n + Σ_{i=1}^K φ^i_{n+1} S^i_n + c_n

    which, with N_n = B_n, becomes

    φ^0_n + Σ_{i=1}^K φ^i_n S̃^i_n = φ^0_{n+1} + Σ_{i=1}^K φ^i_{n+1} S̃^i_n + c̃_n

  →  Ṽ_{n+1} = φ^0_{n+1} + Σ_{i=1}^K φ^i_{n+1} S̃^i_{n+1} = Ṽ_n + Σ_{i=1}^K φ^i_{n+1} ∆S̃^i_n − c̃_n

  ⇒ (for c̃_n = 0)  E^Q { Ṽ_{n+1} | F_n } = Ṽ_n

    i.e. the discounted values of a self-financing portfolio are (Q, F_n)-martingales.

SLIDE 69

  • Recall the hedging problem: given H_N ∈ F^S_N, determine V_0 = v and a self-financing strategy φ (no consumption) s.t.

    V^φ_N = H_N a.s.  ⇔  Ṽ^φ_N = H̃_N a.s.

  • Since Ṽ^φ_n is a Q-martingale for any martingale measure Q,

    V_0 = Ṽ_0 = E^Q{ Ṽ^φ_N } = E^Q{ H̃_N }

    and this determines the initial wealth V_0 = v.

SLIDE 70

Determining the hedging strategy corresponds to a martingale representation problem.

i) Define M̃_n := E^Q{ H̃_N | F_n }, which is a (Q, F_n)-martingale:

    E^Q{ M̃_n | F_m } = E^Q{ E^Q{ H̃_N | F_n } | F_m } = E^Q{ H̃_N | F_m } = M̃_m

ii) Determine φ̄_n s.t., with V_0 = v and

    Ṽ^φ̄_n = V_0 + Σ_{m=0}^{n−1} Σ_{i=1}^K φ̄^i_{m+1} ∆S̃^i_m ,

    one has M̃_n = Ṽ_n (representing the martingale M̃_n in the form of Ṽ_n).

  → φ̄_n is then the hedging strategy.

SLIDE 71

Martingale method (discrete time): Methodology (only terminal utility; no consumption)

  • 1. Given V_0 = v, determine the set of reachable terminal wealths V_N, i.e.

    V_v := { V | V = V^φ_N for φ self-financing and V_0 = v }

  • 2. Determine the optimal terminal wealth V*_N :

    E{U(V*_N)} ≥ E{U(V_N)}   ∀ V_N ∈ V_v

  • 3. Determine a self-financing strategy φ* s.t.

    V^{φ*}_N = V*_N

    (this corresponds to hedging the "claim" H_N = V*_N).

SLIDE 72

  • Solving 1. : V_v is the set of all V_N s.t.

    E^Q{ Ṽ_N } = v   ∀ MM Q

  → If the set of all MM's is a convex polyhedron with a finite number of "vertices" Q_j (j = 1, · · · , J), then the condition becomes

    E^{Q_j}{ Ṽ_N } = v ,   j = 1, · · · , J

SLIDE 73

  • Solving 2., i.e.

    max_{V∈V_v} E{U(V)} = max_{ {V | E^{Q_j}{Ṽ} = v , j=1,··· ,J} } E{U(V)}

  → Use the Lagrange multiplier method with L_j := dQ_j/dP, so that E^{Q_j}{Ṽ} = E{Ṽ L_j}, and one has

    max_V E { U(V) − Σ_{j=1}^J λ_j B_N^{-1} V L_j }

  →  U′(V) = Σ_{j=1}^J λ_j B_N^{-1} L_j

SLIDE 74

  • Putting I(·) = (U′(·))^{-1}, it follows that

    V*_N = I( Σ_{k=1}^J λ_k B_N^{-1} L_k )

    with the λ_j satisfying the system of budget equations

    v = E { B_N^{-1} V*_N L_j } = E { B_N^{-1} L_j I( Σ_{k=1}^J λ_k B_N^{-1} L_k ) }   for j = 1, · · · , J.

SLIDE 75

Example: U(v) = log v → I(y) = y^{-1}

→ In a complete market (a single MM Q) the budget equation becomes

    v = λ^{-1}  ⇔  λ = v^{-1}   and   V*_N = v B_N / L   with L = dQ/dP

SLIDE 76

  • In a binomial market model, with ν_N denoting the total random number of up-movements,

    L = (dQ/dP)(ν_N) = (q/p)^{ν_N} ((1 − q)/(1 − p))^{N−ν_N}

  → (for simplicity r_n = 0, i.e. B_n = 1)

    V*_N = v (p/q)^{ν_N} ((1 − p)/(1 − q))^{N−ν_N}

    and (recall E{ν_N} = Np)

    E{U(V*_N)} = log v + N [ p log(p/q) + (1 − p) log((1−p)/(1−q)) ]

  → compare with DP; similarly for the strategies.
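Both the budget condition E^Q{V*_N} = v and the optimal value E{log V*_N} can be verified numerically by enumerating the number of up-moves; a sketch with hypothetical v, p, q, N (B_n = 1 throughout):

```python
import math
from math import comb

def martingale_method_binomial(v, p, q, N):
    """Enumerate V*_N = v (p/q)^k ((1-p)/(1-q))^(N-k) over the number k of
    up-moves. Returns (E_Q[V*_N], E_P[log V*_N]): the first should equal v
    (budget equation), the second the DP value log v + N * k_0-increment."""
    eq = elog = 0.0
    for k in range(N + 1):
        vN = v * (p / q) ** k * ((1.0 - p) / (1.0 - q)) ** (N - k)
        eq += comb(N, k) * q**k * (1.0 - q) ** (N - k) * vN
        elog += comb(N, k) * p**k * (1.0 - p) ** (N - k) * math.log(vN)
    return eq, elog

eq, elog = martingale_method_binomial(v=100.0, p=0.6, q=0.5, N=5)
```

The value E_P{log V*_N} coincides with the DP result Φ_0(v) = log v + k_0 of the earlier slides, which is the comparison the slide alludes to.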
SLIDE 77

Martingale method (discrete time): Methodology (terminal utility with consumption)

Definition: An investment/consumption strategy (φ, c) is admissible if c_N ≤ V_N.

SLIDE 78

  • Recalling that, allowing also for consumption, the self-financing condition reads

    Ṽ_n = V_0 + Σ_{m=0}^{n−1} Σ_{i=1}^K φ^i_{m+1} ∆S̃^i_m − Σ_{m=0}^{n−1} c̃_m ,

    we give also the following

Definition: An investment/consumption strategy (φ, c) is attainable from the initial endowment V_0 = v if (letting the set of MM's be a convex polyhedron with J vertices)

    v = E^{Q_j} { c̃_0 + · · · + c̃_{N−1} + Ṽ_N } ,   ∀ j = 1, · · · , J

SLIDE 79

Procedure

i) Determine the set of attainable consumption processes and terminal wealths.
ii) Determine the optimal attainable consumption and terminal wealth.
iii) Determine an investment strategy that allows one to consume according to the optimal consumption process.

Solving i) : see the definition of attainability.

SLIDE 80

Solving ii) :

    max_{c,V_N} E { Σ_{n=0}^N U_c(c_n) + U_p(V_N − c_N) }

    with the following budget equations, where N^j_n := B_n^{-1} E{ L_j | F_n } :

    v = E^{Q_j} { Σ_{n=0}^{N−1} c̃_n + Ṽ_N } = E { L_j ( Σ_{n=0}^{N−1} c̃_n + Ṽ_N ) }

      = E { Σ_{n=0}^{N−1} E{ B_n^{-1} c_n L_j | F_n } + E{ B_N^{-1} V_N L_j | F_N } }

      = E { Σ_{n=0}^{N−1} c_n N^j_n + V_N N^j_N } ,   ∀ j = 1, · · · , J

    Having U_c(c) = −∞ for c < 0 and U_p(v) = −∞ for v < 0 guarantees c_n ≥ 0, c_N ≤ V_N → admissibility.

SLIDE 81

Lagrange multiplier technique:

    max E { Σ_{n=0}^N U_c(c_n) + U_p(V_N − c_N) − Σ_{j=1}^J λ_j ( Σ_{n=0}^{N−1} c_n N^j_n + V_N N^j_N ) }

  ⇒  U′_c(c_n) = Σ_{j=1}^J λ_j N^j_n ,   n = 0, · · · , N − 1

     U′_c(c_N) = U′_p(V_N − c_N)

     U′_p(V_N − c_N) = Σ_{j=1}^J λ_j N^j_N

SLIDE 82

  ⇒  c_n = I_c( Σ_{j=1}^J λ_j N^j_n ) ,   n = 0, · · · , N

     V_N = I_p( Σ_{j=1}^J λ_j N^j_N ) + I_c( Σ_{j=1}^J λ_j N^j_N )

    with the budget equations

    v = E { Σ_{n=0}^{N−1} N^j_n I_c( Σ_{k=1}^J λ_k N^k_n ) + N^j_N I_p( Σ_{k=1}^J λ_k N^k_N ) }   for j = 1, · · · , J.

SLIDE 83

Martingale approach (continuous time)

Preliminaries: determining the hedging strategy in a complete market (martingale representation).

    (P)  dS_t = (diag S_t) A_t dt + (diag S_t) Σ_t dw_t ,   Σ_t invertible

  → Want a measure Q ∼ P s.t.

    (Q)  dS_t = (diag S_t) r_t 1 dt + (diag S_t) Σ_t dw^Q_t

  → S̃^i_t := B_t^{-1} S^i_t satisfies

    dS̃_t = (diag S̃_t) Σ_t dw^Q_t

    i.e. S̃_t is a (Q, F_t)-martingale (Q is a martingale measure (MM)).

SLIDE 84

  → Comparison of the two representations implies

    dw^Q_t = dw_t + θ_t dt ,   where θ_t := Σ_t^{-1}(A_t − r_t 1)

    i.e. Q is obtained from P by a Girsanov-type measure transformation implying a translation of the Wiener process w_t by θ_t.

  → For the given model a MM exists and is unique, with

    L = dQ/dP = exp( − ∫_0^T θ′_t dw_t − (1/2) ∫_0^T θ′_t θ_t dt )

SLIDE 85

  • From the self-financing condition

    dV_t = φ^0_t dB_t + Σ_{i=1}^N φ^i_t dS^i_t ,

    putting Ṽ_t := B_t^{-1} V_t, one has

    dṼ_t = φ_t dS̃_t = φ_t (diag S̃_t) Σ_t dw^Q_t

    i.e., under Q, also Ṽ_t is a martingale with

    Ṽ_t = Ṽ_0 + ∫_0^t φ_s (diag S̃_s) Σ_s dw^Q_s

    and the problem is to possibly find Ṽ_0 and φ_t s.t. Ṽ_T = B_T^{-1} H_T a.s.

SLIDE 86

  • Consider the following (Q, F_t)-martingale (assume H_T = H(S_T) and put H̃_T := B_T^{-1} H_T):

    M̃_t := E^Q{ H̃_T | F_t } = E^Q{ H̃_T | S̃_t } := F(t, S̃_t)

  ⇒ The problem is solved if we find Ṽ_0 and φ_t s.t. Ṽ_t = M̃_t a.s. (we need a martingale representation for M̃_t).

  • By Ito's rule,

    dM̃_t = dF(t, S̃_t) = [ F_t(·) + (1/2) tr{ (diag S̃_t) Σ_t Σ′_t (diag S̃_t) F_ss(·) } ] dt + F_s(·) (diag S̃_t) Σ_t dw^Q_t

SLIDE 87

  • Since M̃_t is a martingale,

    F_t(t, s) + (1/2) tr{ (diag s) Σ_t Σ′_t (diag s) F_ss(t, s) } = 0 ,   F(T, s) = H̃(s)

    and one has the explicit martingale representation

    M̃_t = M̃_0 + ∫_0^t F_s(u, S̃_u) (diag S̃_u) Σ_u dw^Q_u

    The problem is thus solved by choosing

    Ṽ_0 = M̃_0 = E^Q{ H̃_T } ,   φ_t = F_s(t, S̃_t)
slide-88
SLIDE 88

Basic idea of the martingale method Two steps : i) Determine the optimal value of the cost functional that, for a given initial capital V0, can be reached by a self financing portfolio (static optimization under a constraint) ii) Determine the control/strategy that achieves this optimal value. → For step ii) use martingale representation → To solve the (static) problem in point i) more possibilities, e.g.:

  • Method based on Lagrange multipliers;
  • method based on convex duality.
slide-89
SLIDE 89

Lagrange multiplier method

$$\max_{V \in \mathcal V_v} E\{U(V)\} \qquad \text{with } \mathcal V_v = \left\{ V \mid E^Q\{B_T^{-1}V\} = v \right\}$$

leads then to

$$\max_V \left[ E\{U(V)\} - \lambda\,E^Q\{B_T^{-1}V\} \right] = \max_V E\left[ U(V) - \lambda L B_T^{-1} V \right]$$

slide-90
SLIDE 90

Example: $U(v) = \log v$; single MM $Q$, $B_t \equiv 1$

In this case $I(y) = y^{-1}$ → $\lambda = v^{-1}$ → $V^*_T = \dfrac{v}{L}$ with

$$L = \frac{dQ}{dP} = \exp\left[-\int_0^T \theta_t'\,dw_t - \frac 12 \int_0^T \theta_t'\theta_t\,dt\right]$$

  • where $\theta_t = \Sigma_t^{-1}(A_t - r_t\mathbf 1)$,

and therefore (the stochastic integral has zero expectation)

$$E\{\log V^*_T\} = \log v + E\left\{ \int_0^T \theta_t'\,dw_t + \frac 12 \int_0^T \theta_t'\theta_t\,dt \right\} = \log v + \frac 12 \int_0^T (A_t - r_t\mathbf 1)'\,\Sigma_t^{-2}\,(A_t - r_t\mathbf 1)\,dt$$

slide-91
SLIDE 91
  • The optimal investment strategy is now determined as the hedging strategy for the claim $H_T = V^*_T = vL^{-1}$.

→ Since $B_t \equiv 1$, all quantities are automatically already discounted and so, under the unique MM $Q$, one has

$$dS_t = (\operatorname{diag} S_t)\,\Sigma_t\,dw^Q_t \ ; \qquad dV_t = V_t\,\pi_t'\Sigma_t\,dw^Q_t$$

  • Determine now $\pi_t$ such that the $Q$-martingale $V_t$ matches the following $Q$-martingale $M_t$:

$$M_t := E^Q\{vL^{-1} \mid \mathcal F_t\}$$

→ need a representation of $L^{-1}$ under $Q$.

slide-92
SLIDE 92
  • From $L = \dfrac{dQ}{dP} = \exp\left[ -\int_0^T \theta_t'\,dw_t - \frac 12 \int_0^T \theta_t'\theta_t\,dt \right]$ and using the fact that $dw^Q_t = dw_t + \theta_t dt$, one has

$$L^{-1} = \frac{dP}{dQ} = \exp\left[ \int_0^T \theta_t'\,dw_t + \frac 12 \int_0^T \theta_t'\theta_t\,dt \right] = \exp\left[ \int_0^T \theta_t'\,dw^Q_t - \frac 12 \int_0^T \theta_t'\theta_t\,dt \right]$$

$$L^{-1}_t := E^Q\{L^{-1} \mid \mathcal F_t\} = \exp\left[ \int_0^t \theta_s'\,dw^Q_s - \frac 12 \int_0^t \theta_s'\theta_s\,ds \right] \quad\Rightarrow\quad dL^{-1}_t = L^{-1}_t\,\theta_t'\,dw^Q_t$$

slide-93
SLIDE 93
  • One can now write

$$M_t := E^Q\{vL^{-1} \mid \mathcal F_t\} = vL^{-1}_t \qquad\text{and}\qquad dM_t = vL^{-1}_t\,\theta_t'\,dw^Q_t = M_t\,\theta_t'\,dw^Q_t$$

From $dV_t = V_t\pi_t'\Sigma_t\,dw^Q_t$ and $dM_t = M_t\theta_t'\,dw^Q_t$ one then has

$$\pi_t'\Sigma_t = \theta_t' \quad\rightarrow\quad \pi_t = \Sigma_t^{-1}\theta_t$$

  • Recalling that $\theta_t := \Sigma_t^{-1}(A_t - r_t\mathbf 1) = \Sigma_t^{-1}A_t$ (since $B_t \equiv 1 \Rightarrow r_t = 0$), one finally has that

$$\pi_t = \Sigma_t^{-2}A_t$$

which is constant if $A_t$ and $\Sigma_t$ do not depend on time.
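A minimal numerical illustration of the final formula, assuming a diagonal $\Sigma$ so that $\pi_t = \Sigma_t^{-2}A_t$ reduces to a component-wise division (the drift and volatility values below are hypothetical):

```python
def log_optimal_weights(A, sigma_diag):
    """pi = Sigma^{-2} A for a diagonal volatility matrix Sigma:
    component-wise pi^i = A_i / Sigma_ii^2 (log-utility, B_t = 1)."""
    return [a / s**2 for a, s in zip(A, sigma_diag)]

# two assets with drifts 8% and 5%, volatilities 20% and 30%
w = log_optimal_weights([0.08, 0.05], [0.2, 0.3])
```

Since $A$ and $\Sigma$ are constant here, the log-optimal fractions are constant over time, as stated above.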

slide-94
SLIDE 94

Discussion of DP vs MM

  • DP is based (in continuous time) on HJB: first one determines the optimal control as a function of the (yet unknown) optimal value; substituting this back into HJB one obtains a nonlinear PDE that leads to the optimal value. In MM the order is reversed: first one determines the optimal value without reference to the control, and then the optimal strategy is determined as a strategy that leads to this optimal value.

slide-95
SLIDE 95
  • DP is a fully dynamic procedure by which, provided that the state process is Markovian and the cost is additive over time, the optimization over time is reduced to a parameter optimization. MM is a more static procedure and, in fact, it does not require Markovianity. In general, MM has however a narrower field of applicability.

  • The dynamic structure of DP makes it better suited to deal with problems with partial/incomplete information.

  • Explicit solutions are not easy to obtain by either of the methods. For DP there exist approximation methods, which is not so much the case for MM.

slide-96
SLIDE 96

Incomplete information/model uncertainty

To obtain an optimal solution for a financial problem one needs a model. The model may not be perfectly known; on the other hand, the solution may be rather sensitive to the model.

→ Problem of model uncertainty (model risk)
→ In what follows, three possible approaches for hedging and utility maximization under model uncertainty.

slide-97
SLIDE 97

Min-max approach

It is a natural approach, but rather conservative in that it protects against the worst case scenario.

  • Letting $\mathcal P$ be a family of possible “real world probability measures” (ambiguity set), consider the following criterion related to shortfall risk minimization for the hedging problem

$$\inf_\pi \sup_{P \in \mathcal P} E^P_{S_0,V_0}\left\{ L\left( (H_T - V^\pi_T)^+ \right) \right\}$$

→ may be considered as the upper value of a fictitious game between the market and the agent.

slide-98
SLIDE 98

Question: Does this game have a value, i.e. does the upper value coincide with the lower max-min value

$$\sup_{P \in \mathcal P} \inf_\pi E^P_{S_0,V_0}\left\{ L\left( (H_T - V^\pi_T)^+ \right) \right\} \ ?$$

Answer: (in general) yes!

→ This approach requires in general a large initial capital, and it does not easily allow one to incorporate the successive information that becomes available by observing the market.

slide-99
SLIDE 99

Adaptive approaches (stochastic control under partial information; stochastic adaptive control)

  • Consider parametrized families of models and successively update the knowledge about the parameters on the basis of observed prices.

→ Bayesian point of view: updating the knowledge of the parameters ≡ updating their distributions.
→ The unknown quantities may also be hidden processes ⇒ combined filtering and parameter estimation.

slide-100
SLIDE 100
  • A. A first discrete time case
  • Underlying market model (only one risky asset)
  • Start from a classical price evolution model in continuous time that we define under the physical measure $P$:

$$dS_t = S_t\left[ a\,dt + X_t\,dw_t \right]$$

with $w_t$ a Wiener process and where $X_t$ is the not directly observable volatility process (factor).

  • For $Y_t := \log S_t$ one then has

$$dY_t = \left( a - \tfrac 12 X_t^2 \right)dt + X_t\,dw_t$$
slide-101
SLIDE 101
  • Passing to discrete time (deterministic time points with step $\delta$), let for $n = 0, \cdots, N$

$$\begin{cases} X_n : \text{Markov chain with } m \text{ states } x^1, \cdots, x^m \ \text{(generally resulting from a discretization of } X_t) \\[4pt] Y_n = Y_{n-1} + \left( a - \tfrac 12 X_{n-1}^2 \right)\delta + X_{n-1}\sqrt\delta\,\varepsilon_n \end{cases}$$

with $\varepsilon_n$ i.i.d. $\sim N(0,1)$ (Euler scheme).

→ The pair $(X_n, Y_n)$ is Markov, where $X_n$ is the unobservable factor (volatility) and $Y_n$ are the observations (log-prices).
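The pair $(X_n, Y_n)$ above is straightforward to simulate. A sketch with a hypothetical two-state volatility chain (the states, transition matrix and drift below are illustrative, not from the slides):

```python
import math, random

def simulate(a, states, P, delta, N, seed=0):
    """Euler scheme Y_n = Y_{n-1} + (a - X_{n-1}^2/2) delta + X_{n-1} sqrt(delta) eps_n,
    with X_n a finite-state Markov chain with transition matrix P."""
    rng = random.Random(seed)
    i = 0                      # start the chain in state x^1
    Y = [0.0]                  # Y_0 = log S_0 = 0 for illustration
    for _ in range(N):
        x = states[i]
        Y.append(Y[-1] + (a - 0.5 * x * x) * delta + x * math.sqrt(delta) * rng.gauss(0, 1))
        u, c = rng.random(), 0.0
        for j, p in enumerate(P[i]):   # draw the next chain state
            c += p
            if u <= c:
                i = j
                break
    return Y

# calm (10%) and turbulent (40%) volatility regimes, daily steps over one year
Y = simulate(a=0.05, states=[0.1, 0.4], P=[[0.95, 0.05], [0.10, 0.90]], delta=1/252, N=252)
```

Only the trajectory `Y` would be available to the investor; the chain state is the hidden factor.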

slide-102
SLIDE 102
  • More generally, consider

$$Y_n = G_n(X_{n-1}, Y_{n-1}, X_n, \varepsilon_n)$$

and assume that the distribution of $Y_n$ conditional on $(X_{n-1}, Y_{n-1}, X_n)$ has a bounded and known density

$$y' \mapsto g_n(X_{n-1}, Y_{n-1}, X_n, y')$$

slide-103
SLIDE 103

Portfolio optimization and hedging

  • Consider an investor who can trade at any time $n \le N$ a number $\varphi_n \in A \subset \mathbb R$ of shares in the stock, investing the rest (its monetary amount being denoted by $\beta_n$) in a riskless asset with constant interest rate $r$.

  • The corresponding (self financing) wealth process then satisfies

$$V^\varphi_{n+1} = V^\varphi_n e^{r\delta} + \varphi_n\left( e^{Y_{n+1}} - e^{Y_n}e^{r\delta} \right)$$

since

$$V^\varphi_{n+1} = \varphi_{n+1}S_{n+1} + \beta_{n+1} \overset{\text{(self-fin.)}}{=} \varphi_n S_{n+1} + \beta_n e^{\delta r} = (\varphi_n S_n + \beta_n)e^{\delta r} + \varphi_n(S_{n+1} - S_n e^{\delta r}) = V_n e^{\delta r} + \varphi_n\left( e^{Y_{n+1}} - e^{Y_n}e^{\delta r} \right)$$

slide-104
SLIDE 104

More generally, consider

$$V^\varphi_{n+1} = F\left( V^\varphi_n, \varphi_n, Y_n, Y_{n+1} \right) \ ; \qquad n = 0, \cdots, N-1$$

with $\varphi_n$ adapted to $\mathcal F^Y_n = \sigma\{Y_0, \cdots, Y_n\}$ (class $\mathcal A$).

slide-105
SLIDE 105
  • Given a time horizon $N$, as control criterion consider

$$J_{opt}(V_0) = \inf_{\varphi \in \mathcal A} J(V_0, \varphi) = \inf_{\varphi \in \mathcal A} E\left\{ \sum_{n=0}^{N-1} f_n(X_n, Y_n, V^\varphi_n, \varphi_n) + \ell(X_N, Y_N, V^\varphi_N) \right\}$$

  • which includes portfolio optimization and hedging in incomplete markets. (By some abuse of notation we denote here by $\varphi$ a strategy $\varphi = (\varphi_0, \cdots, \varphi_{N-1}) \in \mathcal A$.)

→ It is a stochastic control problem under partial/incomplete information.

slide-106
SLIDE 106

→ When hedging a payoff $h(Y_N)$ at maturity $N$, take $f_n(\cdot) \equiv 0$ and

  • in case of mean-variance hedging: $\ell(X_N, Y_N, V_N) = (h(Y_N) - V_N)^2$
  • in case of shortfall risk minimization: $\ell(X_N, Y_N, V_N) = (h(Y_N) - V_N)^+$

slide-107
SLIDE 107

Transition to complete information

  • A standard approach to optimization problems under partial information is to transform them into complete information ones by replacing the unobserved state variables $X_n$ by their conditional distributions given past and present observations of $Y$ (filter distribution).

  • Let $\mathcal F^Y_n$ be the filtration generated by $Y_j$, ($j \le n$), and let

$$\Pi^i_n := P\left( X_n = x^i \mid \mathcal F^Y_n \right) \ ; \qquad i = 1, \cdots, m$$

slide-108
SLIDE 108
  • By the Markovianity of $X_n$ (denote by $P^{ij}_n$ its transition probability matrix in period $n$) and by Bayes’ formula one has (filter dynamics)

$$\begin{cases} \Pi_0 = \mu \\[4pt] \Pi_n = \bar H_n(\Pi_{n-1}, Y_{n-1}, Y_n) := \dfrac{H_n(Y_{n-1}, Y_n)'\,\Pi_{n-1}}{\left| H_n(Y_{n-1}, Y_n)'\,\Pi_{n-1} \right|} \end{cases}$$

where $\mu$ is the (known) distribution of $X_0$ and

$$H^{ij}_n(Y_{n-1}, Y_n) = g_n(x^i, Y_{n-1}, x^j, Y_n)\,P^{ij}_n$$
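The filter recursion above can be sketched for the Euler model of the previous slides, where the observation density $g_n$ is Gaussian and depends only on the previous chain state $x^i$ (an assumption matching that specific model; the parameter values are illustrative):

```python
import math

def filter_step(pi_prev, y_prev, y_new, states, P, a, delta):
    """One step of Pi_n = H' Pi_{n-1} / |H' Pi_{n-1}| with
    H^{ij} = g(x^i, y_prev, x^j, y_new) P^{ij}; in the Euler scheme
    g is a Gaussian density depending only on the previous state x^i."""
    def g(x):  # density of Y_n given X_{n-1} = x, Y_{n-1} = y_prev
        mean = y_prev + (a - 0.5 * x * x) * delta
        var = x * x * delta
        return math.exp(-(y_new - mean) ** 2 / (2 * var)) / math.sqrt(2 * math.pi * var)
    m = len(states)
    unnorm = [sum(g(states[i]) * P[i][j] * pi_prev[i] for i in range(m)) for j in range(m)]
    s = sum(unnorm)                       # normalization |H' Pi|
    return [u / s for u in unnorm]

# a small log-return of 0.01 is more consistent with the low-volatility state
pi1 = filter_step([0.5, 0.5], 0.0, 0.01, [0.1, 0.4],
                  [[0.95, 0.05], [0.10, 0.90]], a=0.05, delta=1/252)
```

The output is again a probability vector over the chain states, ready for the next recursion step.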

slide-109
SLIDE 109
  • On the other hand, let $Q_n(\pi, y, dy')$ be the law of $Y_n$ conditional on $(\Pi_{n-1}, Y_{n-1}) = (\pi, y)$, with density

$$y' \mapsto \sum_{i,j=1}^m g_n(x^i, y, x^j, y')\,P^{ij}_n\,\pi^i$$

→ $(\Pi_n, Y_n)$ is a sufficient statistic and an $\mathcal F^Y_n$-Markov process.

slide-110
SLIDE 110

Solution approach

  • By iterated conditional expectations and putting

$$\hat f_n(\pi, y, v, \varphi) = \sum_{i=1}^m f_n(x^i, y, v, \varphi)\,\pi^i \ , \qquad \hat\ell(\pi, y, v) = \sum_{i=1}^m \ell(x^i, y, v)\,\pi^i$$

one has

$$J(V_0, \varphi) = E\left\{ \sum_{n=0}^{N-1} E\{f_n(X_n, Y_n, V^\varphi_n, \varphi_n) \mid \mathcal F^Y_n\} + E\{\ell(X_N, Y_N, V^\varphi_N) \mid \mathcal F^Y_N\} \right\} = E\left\{ \sum_{n=0}^{N-1} \hat f_n(\Pi_n, Y_n, V^\varphi_n, \varphi_n) + \hat\ell(\Pi_N, Y_N, V^\varphi_N) \right\}$$

slide-111
SLIDE 111
  • By the Markovianity of $Z_n = (\Pi_n, Y_n)$ and the Dynamic Programming approach, defining recursively the functions

$$\begin{cases} u_N(\pi, y, v) = \hat\ell(\pi, y, v) \\[4pt] u_n(\pi, y, v) = \inf_{\varphi \in A}\left[ \hat f_n(\pi, y, v, \varphi) + E\left\{ u_{n+1}\left( \Pi_{n+1}, Y_{n+1}, F(v, \varphi, y, Y_{n+1}) \right) \mid (\Pi_n, Y_n) = (\pi, y) \right\} \right] \end{cases}$$

(where $\varphi$ here refers to the generic decision $\varphi = \varphi_n$ in period $n$) one has

$$u_0(\mu, Y_0, V_0) = J_{opt}(V_0)$$

→ Requires the conditional law of $Z_{n+1}$ given $Z_n$.

slide-112
SLIDE 112

→ Even if $X_n$ is $m$-valued, $\Pi_n$ takes infinitely many values in the $m$-dimensional simplex

$$\mathcal N_m = \left\{ \pi = (\pi^i)_{1 \le i \le m} \ \Big|\ \pi^i \ge 0, \ \sum_i \pi^i = 1 \right\}$$

→ To be able to perform actual computations one needs an approximation leading to a finite-valued process $\hat Z_n = (\hat\Pi_n, \hat Y_n)$.

slide-113
SLIDE 113

Approximations

  • The filter distribution $\Pi_n$ in the generic period $n$ may be seen as a sufficient statistic for $(Y_0, \cdots, Y_n)$, and one may express this by writing $\Pi_n = \Pi_n(Y_0, \cdots, Y_n)$.

  • A basic traditional approximation approach consists in approximating each $Y_j$, ($j \le n$), by a discrete r.v. $\hat Y_j$ and then approximating $\Pi_n(Y_0, \cdots, Y_n)$ by $\Pi_n(\hat Y_0, \cdots, \hat Y_n)$.

→ Problem: the number of possible values grows exponentially with $n$ (if $\hat Y_j$ takes $M$ values, then in period $n$ one has $M^n$ possible values of $\Pi_n(\hat Y_0, \cdots, \hat Y_n)$).

slide-114
SLIDE 114
  • Alternatively: given a maximum number $K$ of acceptable discrete values, one performs a quantization of the Markov process $Z_n := (\Pi_n, Y_n)$ that leads to its best $L^2$-approximation by a discrete Markov process $\hat Z_n = (\hat\Pi_n, \hat Y_n)$, where each $\hat Z_n$ takes at most $K$ values.

→ This approximation of $Z_n$ then induces corresponding approximations in the optimization problem.

slide-115
SLIDE 115
  • Pagès G., Pham H. and Printems J. (2004), “Optimal quantization methods and applications to numerical problems in finance”, Handbook of Computational and Numerical Methods in Finance (S. Rachev, ed.), Birkhäuser Verlag.

  • Pham H., Runggaldier W. and Sellami A. (2005), “Approximation by quantization of the filter process and applications to optimal stopping problems under partial observation”, Monte Carlo Methods and Applications, 11, pp. 57–82.

  • Corsi M., Pham H. and Runggaldier W. (2006), “Numerical Approximation by Quantization of Control Problems in Finance under Partial Observations”. To appear in: Mathematical Modelling and Numerical Methods in Finance, Handbook of Numerical Analysis, Vol. XV (A. Bensoussan, Q. Zhang, eds.).

slide-116
SLIDE 116
  • B. A second discrete time case
  • The multinomial case
  • Recall

$$Y_n = Y_{n-1} + \left( a - \tfrac 12 X_{n-1}^2 \right)\delta + X_{n-1}\sqrt\delta\,\varepsilon_n := Y_{n-1} + \xi_n$$

with $\xi_n$ (conditionally on $X$) i.i.d. Gaussian.

  • Let now

$$Y_n = Y_{n-1} + \xi_n$$

with $\xi_n$ i.i.d. multinomial, $\xi_n \in \{\xi^1, \cdots, \xi^M\}$, with probability $q = (q^1, \cdots, q^M)$.

→ Incomplete information: $q$ unknown.
slide-117
SLIDE 117

DP under complete and incomplete information

  • Recall (complete information about $q$)

$$\begin{cases} \Phi_N(v) = \max_{\pi_N} U(v, \pi_N) \\[4pt] \Phi_n(v) = \max_{\pi_n}\left[ U(v, \pi_n) + E\left\{ \Phi_{n+1}\left( G(V_n, \pi_n, \xi_{n+1}) \right) \mid V_n = v \right\} \right] = \max_{\pi_n}\left[ U(v, \pi_n) + \sum_{m=1}^M q^m\,\Phi_{n+1}\left( G(v, \pi_n, \xi^m) \right) \right] \end{cases}$$

  • Not knowing $q$ (knowing however $\mathcal F^Y_n$, equivalently $Y^n_0$):

$$\Phi_n(v, Y^n_0) = \max_{\pi_n}\left[ U(v, \pi_n) + \sum_{m=1}^M E\{q^m \mid Y^n_0\}\,\Phi_{n+1}\left( G(v, \pi_n, \xi^m) \right) \right]$$

→ Need only Bayesian updating of $E\{q^m \mid Y^n_0\}$.
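With a Dirichlet prior on $q$ — a standard conjugate choice, which is an assumption here and not prescribed by the slides — the Bayesian update of $E\{q^m \mid Y^n_0\}$ reduces to counting the observed increments:

```python
def posterior_mean(alpha, counts):
    """Posterior mean of q under a Dirichlet(alpha) prior after observing
    'counts[m]' occurrences of each increment value xi^m: the Dirichlet
    prior is conjugate to the multinomial, so E{q^m | data} is in closed
    form: (alpha_m + counts_m) / (sum(alpha) + n)."""
    total = sum(alpha) + sum(counts)
    return [(a + c) / total for a, c in zip(alpha, counts)]

# uniform prior over M = 3 increment values; 4 observations with counts (3, 1, 0)
pm = posterior_mean([1, 1, 1], [3, 1, 0])
```

At each period the DP recursion above can thus be run with these posterior means in place of the unknown $q^m$.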

slide-118
SLIDE 118
  • C. A continuous time case
  • Utility from terminal wealth; no consumption
  • Given is the financial model ($r_t = 0 \Rightarrow B_t = \text{const.}$)

$$\begin{cases} dS_t = S_t\left[ a_t(S_t, X_t)dt + \sigma_t(S_t)dw_t \right] \\ dX_t = F_t(X_t)dt + R_t(X_t)dM_t \\ dV_t = V_t\left[ \pi_t a_t(X_t)dt + \pi_t\sigma_t dw_t \right] \end{cases}$$

(Recall that, by self financing, $\frac{dV_t}{V_t} = \pi_t\frac{dS_t}{S_t}$.)

→ $M_t$ a martingale independent of $w_t$
→ $\sigma_t$ is independent of $X_t$: in continuous time $\int_0^t \sigma_s^2\,ds$ can be estimated by the empirical quadratic variation (dependence on $X_t$ ⇒ the filter degenerates)

slide-119
SLIDE 119
  • Putting $Z_t := \log S_t$, consider the (specific) problem

$$\begin{cases} dX_t = F_t(X_t)dt + R_t(X_t)dM_t & \text{(unobserved)} \\ dZ_t = A_t(Z_t, X_t)dt + B_t(Z_t)dw_t & \text{(observed)} \\ dV_t = V_t\left[ \pi_t\left( A_t(Z_t, X_t) + \tfrac 12 B_t^2(Z_t) \right)dt + \pi_t B_t(Z_t)dw_t \right] \end{cases}$$

$$\sup_\pi E\{V^\mu_T\} \ , \qquad \mu \in (0, 1)$$

since

$$\frac{dV_t}{V_t} = \pi_t\frac{dS_t}{S_t} = \pi_t\frac{de^{Z_t}}{e^{Z_t}} = \pi_t\left[ \left( A_t(Z_t, X_t) + \tfrac 12 B_t^2(Z_t) \right)dt + B_t(Z_t)dw_t \right]$$

slide-120
SLIDE 120

Reformulation of the incomplete information problem (“separated problem”)

  • Take as new “state”

$$\Psi_t = p_t(x) = p(X_t \mid \mathcal F^Z_t)\big|_{X_t = x}$$

→ the filter distribution of $X_t$ given $\mathcal F^Z_t$.

  • For $\varphi_t = \varphi_t(X_t)$ let

$$p_t(\varphi) := E\left\{ \varphi(X_t) \mid \mathcal F^Z_t \right\} = \int \varphi(x)\,dp_t(x)$$
slide-121
SLIDE 121
  • Putting

$$A_t(Z_t, p_t) := p_t(A_t) = \int A_t(Z_t, x)\,dp_t(x)$$

define the “innovations process” (a Wiener process in the filtration $\mathcal F^Z_t$)

$$d\bar w_t := B_t^{-1}(Z_t)\left[ dZ_t - p_t(A_t)\,dt \right]$$

→ It implies a translation of the $(P, \mathcal F_t)$-Wiener $w_t$:

$$d\bar w_t = dw_t + B_t^{-1}(Z_t)\left[ A_t(Z_t, X_t) - A_t(Z_t, p_t) \right]dt$$

slide-122
SLIDE 122

and thus the implicit measure transformation $P \to \bar P$ with

$$\frac{d\bar P}{dP}\Big|_{\mathcal F_T} = \exp\left[ \int_0^T \left[ A_t(Z_t, p_t) - A_t(Z_t, X_t) \right] B_t^{-1}(Z_t)\,dw_t - \frac 12 \int_0^T \left[ A_t(Z_t, p_t) - A_t(Z_t, X_t) \right]^2 B_t^{-2}(Z_t)\,dt \right]$$

slide-123
SLIDE 123
  • Even if $X_t$ is finite-dimensional, $\Psi_t$ is in general ∞-dimensional.
  • DP (HJB equation) is difficult for an ∞-dimensional state $\Psi_t$.

→ In some cases $\Psi_t = p_t(x) = p(X_t \mid \mathcal F^Z_t)\big|_{X_t = x}$ is finitely parametrized (finite-dimensional filter).

slide-124
SLIDE 124

Examples

  • Linear-Gaussian models

     dXt = FtXtdt + Rtdvt dZt = AtXtdt + Btdwt {vt}, {wt} independent standard Wiener. → p(Xt | FZ

t )| Xt=x ∼ N(mt, γt)

       dmt = Ftmtdt + γt

At Bt d ¯

wt ˙ γt = 2Ftγt − γ2

t

  • At

Bt

2 + R2

t

with d ¯ wt := B−1

t [dZt − Atmt dt]
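The filter equations above can be discretized and run directly. A minimal Euler sketch in the scalar case with constant coefficients (the parameter values are hypothetical):

```python
import math, random

def kalman_bucy(F, R, A, B, T=1.0, n=1000, seed=0):
    """Euler discretization of the scalar Kalman-Bucy filter:
    dm = F m dt + gamma (A/B^2)(dZ - A m dt),
    gamma' = 2 F gamma - gamma^2 A^2/B^2 + R^2."""
    rng = random.Random(seed)
    dt = T / n
    x, m, gamma = 1.0, 0.0, 1.0                  # true state, filter mean, filter variance
    for _ in range(n):
        x += F * x * dt + R * math.sqrt(dt) * rng.gauss(0, 1)    # hidden signal
        dz = A * x * dt + B * math.sqrt(dt) * rng.gauss(0, 1)    # observation increment
        m += F * m * dt + gamma * A / B**2 * (dz - A * m * dt)   # mean update (innovations)
        gamma += (2 * F * gamma - gamma**2 * A**2 / B**2 + R**2) * dt  # Riccati equation
    return x, m, gamma

x, m, gamma = kalman_bucy(F=-0.5, R=0.3, A=1.0, B=0.2)
```

Note that the variance $\gamma_t$ follows a deterministic Riccati ODE and converges to a steady state, while the mean $m_t$ tracks the hidden state through the innovations.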

slide-125
SLIDE 125
  • $X_t$: finite-state Markov with states $\{s^1, \cdots, s^K\}$, i.e.

$$dX_t = \Lambda_t X_t\,dt + dM_t$$

→ Put $\varphi^i(x) = \mathbf 1_{\{x = s^i\}}$ and let

$$p^i_t := P\left( X_t = s^i \mid \mathcal F^Z_t \right)$$

slide-126
SLIDE 126

Furthermore, let

$$D_t(Z_t) := \operatorname{diag}\left( A_t(Z_t, s^1), \cdots, A_t(Z_t, s^K) \right) \ , \qquad A_t(Z_t) := \left[ A_t(Z_t, s^1), \cdots, A_t(Z_t, s^K) \right]'$$

then, with $p_t = [p^1_t, \cdots, p^K_t]'$,

$$dp_t = \Lambda_t p_t\,dt + \left[ D_t(Z_t) - \left( A_t'(Z_t)p_t \right) I \right] p_t\,B_t^{-2}(Z_t)\left( dZ_t - A_t'(Z_t)p_t\,dt \right)$$

→ $p_t$ evolves on a finite-dimensional simplex.

slide-127
SLIDE 127

Reformulation in case of a general finite-dimensional filter

  • Let

$$p_t(x) = p(X_t \mid \mathcal F^Z_t)\big|_{X_t = x} = p(x; \xi_t) \ ; \qquad \xi_t \in \mathbb R^p$$

and suppose that

$$d\xi_t = \beta_t(Z_t, \xi_t)dt + \delta_t(Z_t, \xi_t)\,d\bar w_t$$

  • Putting

$$A_t(Z_t, \xi_t) := \int A_t(Z_t, x)\,dp(x; \xi_t)$$

one has, on $(\Omega, \mathcal F, \mathcal F_t, \bar P)$ with Wiener $\bar w_t$:

slide-128
SLIDE 128

$$\begin{cases} d\xi_t = \beta_t(Z_t, \xi_t)dt + \delta_t(Z_t, \xi_t)\,d\bar w_t \\ dZ_t = A_t(Z_t, \xi_t)dt + B_t(Z_t)\,d\bar w_t \\ dV_t = V_t\left[ \pi_t\left( A_t(Z_t, \xi_t) + \tfrac 12 B_t^2(Z_t) \right)dt + \pi_t B_t(Z_t)\,d\bar w_t \right] \end{cases}$$

$$\sup_\pi \bar E\{V^\mu_T\} \ , \qquad \mu \in (0, 1)$$

→ It is the “separated problem” (equivalent complete information problem). With $Y_t := [Z_t, \xi_t, V_t]$ it is of the form of the general complete information problem.
→ Other reformulations are possible, e.g. as a risk sensitive control problem (Nagai-R. 07).

slide-129
SLIDE 129

Hedging under incomplete information (quadratic criterion)

For the quadratic hedging criterion

$$\min_\pi E_{S_0,V_0}\left\{ (H_T - V^\pi_T)^2 \right\}$$

one has: if $\varphi^*_t(X_t, Z_t)$ is the optimal strategy under full information, then, under the partial information $\mathcal F^Z_t$, the optimal strategy is basically the projection $E\left\{ \varphi^*_t(X_t, Z_t) \mid \mathcal F^Z_t \right\}$.

→ For mathematically less tractable criteria one may thus first obtain an optimal strategy corresponding to the quadratic criterion and then evaluate by simulation its performance relative to the original criterion, possibly adjusting it heuristically.

slide-130
SLIDE 130

Robust approaches

  • Investigate the sensitivity of the solution with respect to the model.

  • How reliable is the solution, obtained for a hypothetical model, when applied to the real problem?

slide-131
SLIDE 131

Assume that for a “real world probability measure” $P$ the problem is

$$\begin{cases} dX_t = A_t(X_t, Z_t, \pi_t)dt + \Sigma_t(X_t, Z_t, \pi_t)dw_t \\[4pt] J^P(X_0, \pi^*) = \inf_\pi J^P(X_0, \pi) = \inf_\pi E^P_{X_0}\left\{ \int_0^T c(X_t, \pi_t)dt + C(X_T) \right\} \end{cases}$$

  • where one may e.g. think of $X_t$ as $X_t = [S_t, I_t, V_t]'$ and of $Z_t$ as a hidden process.

slide-132
SLIDE 132

Assume that, not knowing the real measure $P$, one solves instead the same problem for a hypothetical measure $Q$:

$$\begin{cases} dX_t = A_t(X_t, Z_t, \pi_t)dt + \Sigma_t(X_t, Z_t, \pi_t)dw_t \\[4pt] J^Q(X_0, \pi^Q) = \inf_\pi E^Q_{X_0}\left\{ \int_0^T c(X_t, \pi_t)dt + C(X_T) \right\} \end{cases}$$

  • Problem: find $\pi^Q$ and a bound (uniform in $X_0$) on

$$J^P(X_0, \pi^Q) - J^P(X_0, \pi^*) \ge 0$$

i.e. on the suboptimality of $\pi^Q$, when applied to the real problem, in terms of a measure of the difference between $P$ and $Q$.

slide-133
SLIDE 133

Discontinuous Market Model

Kirch (03), K/R (04)

  • On $(\Omega, \mathcal F, (\mathcal F_t), P)$ and for $t \in [0, T]$, let

$$\begin{cases} B_t \equiv 1 \\[4pt] dS^j_t = S^j_{t-}\left[ \sum_{i=1}^M (e^{a_{ji}} - 1)\,dN^i_t \right] = S^j_0 \exp\left[ \sum_{i=1}^M a_{ji} N^i_t \right] \ , \quad j = 1, \cdots, N \end{cases}$$

$N^i_t$: Poisson processes without common jumps and with $(P, \mathcal F_t)$-intensity $\lambda^i_t$;
$a_{ji}$: deterministic constants, and so $\mathcal F^S_t = \mathcal F^N_t$.

slide-134
SLIDE 134
  • A counting/point process $N_t$ is a Poisson process if
i) $N_0 = 0$;
ii) $N_t$ is a process with independent increments;
iii) $N_t - N_s$ is a Poisson random variable with parameter $\Lambda_{s,t}$.

  • Usually $\Lambda_{s,t} = \int_s^t \lambda_u\,du$

→ $\lambda_t$: intensity of the Poisson (point) process
→ If $N_t$ is a Poisson process with intensity $\lambda_t = \lambda$, then the interarrival times $\tau_{n+1} - \tau_n$ are i.i.d. exponential r.v.’s with parameter $\lambda$.
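The last property gives the standard way to simulate a homogeneous Poisson process: accumulate i.i.d. exponential interarrival times. A minimal sketch (the rate and horizon are illustrative):

```python
import random

def poisson_count(lam, T, rng):
    """N_T for a homogeneous Poisson process with intensity lam, built from
    i.i.d. exponential interarrival times tau_{n+1} - tau_n ~ Exp(lam)."""
    t, n = 0.0, 0
    while True:
        t += rng.expovariate(lam)   # next interarrival time
        if t > T:
            return n
        n += 1

rng = random.Random(42)
# sample average of N_T should be close to E[N_T] = lam * T = 6
mean_NT = sum(poisson_count(3.0, 2.0, rng) for _ in range(20000)) / 20000
```

The empirical mean recovers $\Lambda_{0,T} = \lambda T$, consistent with property iii) above.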

slide-135
SLIDE 135
  • $\lambda_t$ may itself be an adapted process, and this corresponds to a two-step randomization procedure: i) draw at random a trajectory of $\lambda_t$; ii) generate a Poisson process $N_t$ having that $\lambda_t$ as intensity.

  • In this case one obtains a doubly stochastic Poisson process, or Cox process.

  • Just as the Wiener process is the basic building block for processes with continuous trajectories, the Poisson process is the basic building block for processes with jumping trajectories.

slide-136
SLIDE 136
  • The Wiener process is itself a martingale.
  • The Poisson process becomes a martingale by subtracting its mean, i.e.

$$M_t := N_t - \int_0^t \lambda_s\,ds \quad \text{is an } \mathcal F_t\text{-martingale}$$

→ $E\{N_t - N_s \mid \mathcal F_s\} = E\left\{ \int_s^t \lambda_u\,du \,\Big|\, \mathcal F_s \right\}$

$$E\left\{ \int_0^\infty C_s\,dN_s \right\} = E\left\{ \int_0^\infty C_s\lambda_s\,ds \right\} \qquad \forall\ \mathcal F_t\text{-predictable processes } C_t.$$
slide-137
SLIDE 137
  • Self-financing portfolio (no consumption)

$$\frac{dV_t}{V_{t-}} = \sum_{j=1}^N \pi^j_t\,\frac{dS^j_t}{S^j_{t-}} = \sum_{j=1}^N \pi^j_t \sum_{i=1}^M (e^{a_{ji}} - 1)\,dN^i_t$$

$\pi^j_t$: fraction of wealth invested in $S^j$ at time $t$
(assume $\sum_{j=1}^N \pi^j_t(e^{a_{ji}} - 1) \ge -1$; $(i = 1, \cdots, M)$)

$$\rightarrow \quad V_T = V_0 \prod_{i=1}^M \exp\left[ \int_0^T \log\left( 1 + \sum_{j=1}^N \pi^j_t(e^{a_{ji}} - 1) \right) dN^i_t \right]$$

$$\rightarrow \quad \begin{cases} \log V_T = \log V_0 + \displaystyle\sum_{i=1}^M \int_0^T \log\left( 1 + \sum_{j=1}^N \pi^j_t(e^{a_{ji}} - 1) \right) dN^i_t \\[10pt] V^\mu_T = V^\mu_0 \displaystyle\prod_{i=1}^M \exp\left[ \int_0^T \log\left( 1 + \sum_{j=1}^N \pi^j_t(e^{a_{ji}} - 1) \right)^\mu dN^i_t \right] \end{cases}$$

slide-138
SLIDE 138

Log-utility (Problem formulation)

a) $\pi_t \in \mathcal F_t$ (full information, also of $\lambda_t$)

$$\sup_\pi E\{\log V_T\} = \log V_0 + \sup_\pi \sum_{i=1}^M E\left\{ \int_0^T \log\left( 1 + \sum_{j=1}^N \pi^j_t(e^{a_{ji}} - 1) \right) dN^i_t \right\} = \log V_0 + \sup_\pi E\left\{ \int_0^T \sum_{i=1}^M \log\left( 1 + \sum_{j=1}^N \pi^j_t(e^{a_{ji}} - 1) \right) \lambda^i_t\,dt \right\}$$

slide-139
SLIDE 139

b) $\pi_t \in \mathcal F^S_t \subset \mathcal F_t$ ; $\hat\lambda^i_t := E\{\lambda^i_t \mid \mathcal F^S_t\} = E\{\lambda^i(X_t) \mid \mathcal F^S_t\}$

$$\sup_\pi E\{\log V_T\} = \log V_0 + \sup_\pi \sum_{i=1}^M E\left\{ \int_0^T \log\left( 1 + \sum_{j=1}^N \pi^j_t(e^{a_{ji}} - 1) \right) dN^i_t \right\} = \log V_0 + \sup_\pi E\left\{ \int_0^T \sum_{i=1}^M \log\left( 1 + \sum_{j=1}^N \pi^j_t(e^{a_{ji}} - 1) \right) \hat\lambda^i_t\,dt \right\}$$

slide-140
SLIDE 140

Log-utility (Solution)

It suffices to perform, for each $t \in [0, T]$ and each $\omega \in \Omega$,

$$\max_\pi \sum_{i=1}^M \log\left( 1 + \sum_{j=1}^N \pi^j_t(e^{a_{ji}} - 1) \right) \lambda^i_t \qquad \text{(resp. with } \hat\lambda^i_t)$$

leading to the first order conditions

$$\sum_{i=1}^M \frac{\lambda^i_t\,(e^{a_{ji}} - 1)}{1 + \sum_{\ell=1}^N \pi^\ell_t(e^{a_{\ell i}} - 1)} = 0 \ ; \qquad j = 1, \cdots, N \qquad \text{(resp. with } \hat\lambda^i_t)$$

→ CE-type (certainty equivalence) property
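For one asset ($N = 1$) and two jump sources ($M = 2$), clearing denominators in the first order condition gives a linear equation in $\pi$ with a closed-form root. A sketch with hypothetical intensities and jump sizes:

```python
def log_optimal_fraction(lam1, lam2, u1, u2):
    """Solve lam1*u1/(1 + pi*u1) + lam2*u2/(1 + pi*u2) = 0 for pi
    (N = 1, M = 2, u_i = e^{a_{1i}} - 1): clearing denominators yields
    pi = -(lam1*u1 + lam2*u2) / ((lam1 + lam2) * u1 * u2)."""
    return -(lam1 * u1 + lam2 * u2) / ((lam1 + lam2) * u1 * u2)

# up-jumps of 50% at rate 2, down-jumps of -30% at rate 1 (illustrative)
pi = log_optimal_fraction(2.0, 1.0, 0.5, -0.3)
resid = 2.0 * 0.5 / (1 + pi * 0.5) + 1.0 * (-0.3) / (1 + pi * (-0.3))
```

The root also satisfies the admissibility constraint $1 + \pi u_i > 0$ for both jump directions in this example.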

slide-141
SLIDE 141

→ In the special case of $M = 2$: a linear system of $N$ equations in the $N$ unknowns $h^j_t$
(constraint $\sum_{j=1}^N \pi^j_t(e^{a_{ji}} - 1) \ge -1$; $(i = 1, \cdots, M)$)

→ Same result with DP (the generator of $\lambda_t$ does not depend on $h$)
→ In the full information case, using convex duality, the maximization over the MMs leads to a linear system in the parameters indexing the MMs whenever $M = N + 1$.

slide-142
SLIDE 142

POWER UTILITY

Recall first the measure transformation for jump processes (scalar case).

Theorem: Let $N_t$ be a Poisson process with $(P, \mathcal F_t)$-intensity $\lambda_t$. Let $\psi_t \ge 0$ be $\mathcal F_t$-predictable s.t. $\int_0^t \psi_s\lambda_s\,ds < \infty$ $P$-a.s. $\forall t$. Let

$$dL_t = L_{t-}(\psi_t - 1)(dN_t - \lambda_t dt) \ ; \quad\text{i.e.}\quad L_t = \exp\left[ \int_0^t (1 - \psi_s)\lambda_s\,ds + \int_0^t \log(\psi_s)\,dN_s \right] = \exp\left[ \int_0^t (1 - \psi_s)\lambda_s\,ds \right] \prod_{n=1}^{N_t} \psi_{T_n}$$

If $E^P\{L_t\} = 1$, $\forall t \ge 0$, then $\exists$ a measure $Q \sim P$ with $dQ|_{\mathcal F_t} = L_t\,dP|_{\mathcal F_t}$ s.t. $N_t$ has $(Q, \mathcal F_t)$-intensity $\lambda_t\psi_t$.
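For constant $\psi$ and $\lambda$ the density simplifies to $L_t = e^{(1-\psi)\lambda t}\psi^{N_t}$, and the normalization $E^P\{L_t\} = 1$ can be checked by Monte Carlo (the numbers below are illustrative):

```python
import math, random

def mean_L(psi, lam, t, n_samples=50000, seed=7):
    """Monte Carlo check that E^P{L_t} = 1 for constant psi, lambda:
    L_t = exp((1 - psi) lam t) * psi^{N_t}, N_t ~ Poisson(lam t),
    sampled via exponential interarrival times."""
    rng = random.Random(seed)
    acc = 0.0
    for _ in range(n_samples):
        s, n = 0.0, 0
        while True:
            s += rng.expovariate(lam)
            if s > t:
                break
            n += 1
        acc += math.exp((1 - psi) * lam * t) * psi ** n
    return acc / n_samples

m = mean_L(psi=1.5, lam=2.0, t=1.0)
```

Analytically $E[\psi^{N_t}] = e^{\lambda t(\psi - 1)}$, which cancels the deterministic prefactor, so the sample mean should be close to 1.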

slide-143
SLIDE 143

POWER UTILITY

Formulation of the problem as a risk-sensitive control problem

a) $\pi_t \in \mathcal F_t$, predictable (full information, also of $\lambda_t$)

$$E\{V^\mu_T\} = V^\mu_0\,E\left\{ \prod_{i=1}^M \exp\left[ \int_0^T \log\left( 1 + \sum_{j=1}^N \pi^j_t(e^{a_{ji}} - 1) \right)^\mu dN^i_t \right] \right\}$$

$$= V^\mu_0\,E\left\{ \prod_{i=1}^M \exp\left[ \int_0^T \log\left( 1 + \sum_{j=1}^N \pi^j_t(e^{a_{ji}} - 1) \right)^\mu dN^i_t + \int_0^T \left( 1 - \left( 1 + \sum_{j=1}^N \pi^j_t(e^{a_{ji}} - 1) \right)^\mu \right) \lambda^i_t\,dt - \int_0^T \left( 1 - \left( 1 + \sum_{j=1}^N \pi^j_t(e^{a_{ji}} - 1) \right)^\mu \right) \lambda^i_t\,dt \right] \right\}$$

(the compensator term is added and subtracted so that the first two terms form an exponential martingale density).

slide-144
SLIDE 144

⇒

$$E\{V^\mu_T\} = V^\mu_0\,E^\pi\left\{ \exp\left[ \int_0^T \sum_{i=1}^M \left( \left( 1 + \sum_{j=1}^N \pi^j_t(e^{a_{ji}} - 1) \right)^\mu - 1 \right) \lambda^i_t\,dt \right] \right\}$$

→ Under $P^\pi \sim P$ the intensity of $N^i_t$ becomes

$$\lambda^i_t \left( 1 + \sum_{j=1}^N \pi^j_t(e^{a_{ji}} - 1) \right)^\mu .$$

The law of $\lambda_t$ however remains the same.
→ The condition $\sum_{j=1}^N \pi^j_t(e^{a_{ji}} - 1) \ge -1$; $(i = 1, \cdots, M)$ is sufficient for the existence of the Radon-Nikodym derivative having mean equal to 1.

slide-145
SLIDE 145

b) $\pi_t \in \mathcal F^S_t \subset \mathcal F_t$, predictable ($a_{ji}$ being known, $\mathcal F^S_t = \mathcal F^N_t$ for $N_t = (N^1_t, \cdots, N^M_t)$)

→ Since $\pi_t \in \mathcal F^S_t$, the same previous derivation is valid with the $\mathcal F_t$-intensities $\lambda^i_t$ replaced by the $\mathcal F^S_t$-intensities $\hat\lambda^i_t = E\{\lambda^i_t \mid \mathcal F^S_t\}$.

⇒

$$E\{V^\mu_T\} = V^\mu_0\,E^\pi\left\{ \exp\left[ \int_0^T \sum_{i=1}^M \left( \left( 1 + \sum_{j=1}^N \pi^j_t(e^{a_{ji}} - 1) \right)^\mu - 1 \right) \hat\lambda^i_t\,dt \right] \right\}$$

  • and the $P^\pi$-intensities are $\hat\lambda^i_t \left( 1 + \sum_{j=1}^N \pi^j_t(e^{a_{ji}} - 1) \right)^\mu$

slide-146
SLIDE 146

POWER UTILITY (Solution under full information)

It suffices to perform, for each $t$ and each $\omega \in \Omega$,

$$\max_\pi \sum_{i=1}^M \left( \left( 1 + \sum_{j=1}^N \pi^j_t(e^{a_{ji}} - 1) \right)^\mu - 1 \right) \lambda^i_t \qquad \text{(for } \mu \in (0, 1))$$

leading to the first order conditions

$$\sum_{i=1}^M \frac{\lambda^i_t\,(e^{a_{ji}} - 1)}{\left( 1 + \sum_{\ell=1}^N \pi^\ell_t(e^{a_{\ell i}} - 1) \right)^{1-\mu}} = 0 \ , \qquad j = 1, \cdots, N, \qquad \sum_{j=1}^N \pi^j_t(e^{a_{ji}} - 1) \ge -1$$
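Unlike the log case, this first order condition is no longer linear in $\pi$, but for $N = 1$ the left-hand side is continuous and strictly decreasing on the admissible interval, so bisection finds the root. A sketch with the same hypothetical jump parameters as before and $\mu = 0.5$:

```python
def power_foc_root(lams, us, mu, iters=200):
    """Solve sum_i lam_i u_i / (1 + pi u_i)^{1-mu} = 0 by bisection
    (N = 1, u_i = e^{a_{1i}} - 1). f is continuous and strictly decreasing
    on the admissible interval {pi : 1 + pi u_i > 0 for all i}, and
    diverges to +inf / -inf at its endpoints, so a root exists."""
    f = lambda p: sum(l * u / (1 + p * u) ** (1 - mu) for l, u in zip(lams, us))
    # admissible interval from the constraints 1 + pi u_i > 0
    lo = max(-1 / u for u in us if u > 0) + 1e-9
    hi = min(-1 / u for u in us if u < 0) - 1e-9
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if f(mid) > 0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

pi_star = power_foc_root([2.0, 1.0], [0.5, -0.3], mu=0.5)
```

For these numbers the root can be checked by hand (squaring the optimality equation gives a linear equation with solution $\pi = 0.91/0.345 \approx 2.64$), and it exceeds the log-optimal fraction, consistent with power utility with $\mu = 0.5$ being less risk-averse than log utility.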

slide-147
SLIDE 147

Same result with DP

Define the value function

$$w(t, \lambda) := \sup_\pi \log E\left\{ \exp\left[ \int_t^T C(\pi_s, \lambda_s)\,ds \right] \,\Big|\, \lambda_t = \lambda \right\}$$

  • where

$$C(\pi_t, \lambda_t) := \sum_{i=1}^M \left( \left( 1 + \sum_{j=1}^N \pi^j_t(e^{a_{ji}} - 1) \right)^\mu - 1 \right) \lambda^i_t$$

and notice that, although the measure transformation $P \to P^\pi$ changes the intensities of $N^j_t$, it does not change the law of $\lambda_t$.

slide-148
SLIDE 148

The corresponding HJB equation is then given by

$$\begin{cases} \dfrac{\partial}{\partial t} w(t, \lambda) + \mathcal L_t w(t, \lambda) + \sup_\pi\left[ C(\pi, \lambda) \right] = 0 \\[6pt] w(T, \lambda) = 0 \end{cases}$$

with $\mathcal L_t$ the generator corresponding to $(\lambda_t)$.

slide-149
SLIDE 149

POWER UTILITY (Solution under incomplete information)

  • A. Transition to complete information.
Approach based on a “Zakai-type” equation ([Nagai/Peng])

  • Let $\lambda^i_t = \lambda^i(X_t)$ with $X_t$ a $K$-state Markov process with intensity matrix $Q$. (Notice also that $\lambda^i(X_t) \le \bar\lambda$.)

  • Change of measure $P \to \hat P$ such that $\frac{d\hat P}{dP}\big|_{\mathcal F_t} = L_t$ with

$$L_t = \prod_{i=1}^M \exp\left[ \int_0^t (\lambda^i(X_s) - 1)\,ds - \int_0^t \log \lambda^i(X_{s-})\,dN^i_s \right]$$

→ the $\hat P$-intensities become $\lambda^i(X_t) \equiv 1$ and, under $\hat P$, $X_t$ and $N^i_t$ are independent $\forall i = 1, \cdots, M$.

slide-150
SLIDE 150

We may then write

$$E\{V^\mu_T\} = V^\mu_0\,\hat E\left\{ \prod_{i=1}^M \exp\left[ \int_0^T \left( \log\left( 1 + \sum_{j=1}^N \pi^j_t(e^{a_{ji}} - 1) \right)^\mu + \log \lambda^i(X_{t-}) \right) dN^i_t - \int_0^T (\lambda^i(X_t) - 1)\,dt \right] \right\}$$

$$= V^\mu_0\,\hat E\left\{ \exp\left[ \sum_{i=1}^M \left( \int_0^T \log \Gamma^i(\pi_t, X_{t-})\,dN^i_t - \int_0^T (\lambda^i(X_t) - 1)\,dt \right) \right] \right\}$$

having put

$$\Gamma^i(\pi_t, X_t) := \lambda^i(X_t)\left( 1 + \sum_{j=1}^N \pi^j_t(e^{a_{ji}} - 1) \right)^\mu \le \bar\Gamma .$$

slide-151
SLIDE 151
  • Motivated by the RHS for $E\{V^\mu_T\}$, consider the process

$$H_t := \exp\left[ \sum_{i=1}^M \left( \int_0^t \log \Gamma^i(\pi_s, X_{s-})\,dN^i_s - \int_0^t (\lambda^i(X_s) - 1)\,ds \right) \right]$$

for which $\hat E\{H_t\} \le e^{MT(\bar\Gamma + 1)} := \bar q$, and put

$$q_t(k) = \hat E\left\{ \mathbf 1_{\{X_t = k\}} H_t \mid \mathcal F^S_t \right\} \le \bar q \ , \qquad k = 1, \cdots, K$$

so that

$$E\{V^\mu_T\} = V^\mu_0\,\hat E\{H_T\} = V^\mu_0\,\hat E\{\hat E\{H_T \mid \mathcal F^S_T\}\} = V^\mu_0 \sum_{k=1}^K \hat E\{\hat E\{\mathbf 1_{\{X_T = k\}} H_T \mid \mathcal F^S_T\}\} = V^\mu_0\,\hat E\left\{ \sum_{k=1}^K q_T(k) \right\} \quad (\le V^\mu_0 K \bar q)$$

slide-152
SLIDE 152

Zakai equation

Using Ito’s formula on $H_t$ and properties of finite-state Markov chains (the law of $X_t$ is the same under $P$ and $\hat P$) one obtains

$$dq_t(k) = (Q'q_t)(k)\,dt + q_t(k) \sum_{i=1}^M (1 - \lambda^i(k))\,dt + q_{t-}(k) \sum_{i=1}^M \left( \Gamma^i(h_t, k) - 1 \right) dN^i_t$$

→ One has that

$$\frac{q_t(k)}{\sum_{j=1}^K q_t(j)} = P\{X_t = k \mid \mathcal F^S_t\}$$

so that $q_t(k)$ has the interpretation of an unnormalized conditional probability.
→ $q_t(k)$ can be shown to be bounded.

slide-153
SLIDE 153
  • B. The corresponding complete information problem and its solution
  • Putting $q_t = [q_t(1), \cdots, q_t(K)]$, the complete information problem corresponding to the original portfolio optimization problem can be synthesized as (recall $\mathcal F^S_t = \mathcal F^N_t$)

$$\begin{cases} \max_\pi\ V^\mu_0\,\hat E\left\{ \sum_{k=1}^K q_T(k) \right\} \\[8pt] dq_t = \left[ Q' + \sum_{j=1}^M \left( I - \operatorname{diag}(\lambda^j(k)) \right) \right] q_t\,dt + \sum_{j=1}^M \left[ \operatorname{diag}(\Gamma^j(\pi_t, k)) - I \right] q_{t-}\,dN^j_t \end{cases}$$

slide-154
SLIDE 154

→ The dynamics of $q_t$ are under $\hat P$, for which $\lambda^j(X_t) \equiv 1$, $\forall j$.
→ It is a problem of the type of piecewise deterministic control problems and can thus be approached by one of the general techniques for such problems. For our particular situation one can however adapt an approach from [Kirch-R., 2004], leading to an algorithm of the type of value iteration for infinite-horizon MDPs.

slide-155
SLIDE 155
  • At the generic jump time $\tau^j$ of $N^j_t$ one has

$$q_{\tau^j} = \operatorname{diag}(\Gamma^j(h_{\tau^j}, k))\,q_{(\tau^j)-}$$

  • Between two generic jump times of the multivariate jump process $N_t = (N^1_t, \cdots, N^M_t)$, i.e. for $t \in [\tau_n, \tau_{n+1})$, one has the deterministic evolution

$$dq_t = \left[ Q' + MI - \operatorname{diag}\left( \sum_{j=1}^M \lambda^j(k) \right) \right] q_t\,dt := \Lambda\,q_t\,dt$$

so that, for $t \in [\tau_n, \tau_{n+1})$, $q_t = \exp[\Lambda(t - \tau_n)] \cdot q_{\tau_n}$ with $\Lambda$ as defined above.
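Between jumps, evaluating the flow $q_t = e^{\Lambda(t - \tau_n)} q_{\tau_n}$ only requires a matrix exponential acting on a vector. A sketch using a truncated Taylor series, adequate when $\|\Lambda s\|$ is moderate (the matrix below is illustrative, chosen diagonal so the result can be checked component-wise):

```python
import math

def mat_vec(A, v):
    """Matrix-vector product for a matrix given as a list of rows."""
    return [sum(a * x for a, x in zip(row, v)) for row in A]

def expm_vec(Lam, s, q, terms=30):
    """q_t = exp(Lam * s) q via the truncated Taylor series
    sum_k (s Lam)^k q / k!, accumulating term_k = (s/k) Lam term_{k-1}."""
    result = list(q)
    term = list(q)
    for k in range(1, terms):
        term = [x * s / k for x in mat_vec(Lam, term)]
        result = [r + x for r, x in zip(result, term)]
    return result

# for a diagonal Lam the flow reduces to component-wise scalar exponentials
q = expm_vec([[-1.0, 0.0], [0.0, 2.0]], 0.5, [1.0, 1.0])
expected = [math.exp(-0.5), math.exp(1.0)]
```

For larger or stiffer $\Lambda$ one would prefer scaling-and-squaring or a library routine, but the truncated series suffices to illustrate the deterministic inter-jump evolution.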

slide-156
SLIDE 156
  • Recalling that under $\hat P$ one has $\lambda^j(X_t) \equiv 1$, the following holds

$$\hat E\left\{ f(q_{\tau_{n+1}}) \mid q_{\tau_n} = q \right\} = \int_{\tau_n}^T \left[ \sum_{j=1}^M f\left( \operatorname{diag}(\Gamma^j(\pi_t, k))\,e^{\Lambda(t - \tau_n)} q \right) \right] e^{-M(t - \tau_n)}\,dt + f\left( e^{\Lambda(T - \tau_n)} q \right) e^{-M(T - \tau_n)}$$

where, as before, $\Lambda := Q' + MI - \operatorname{diag}\left( \sum_{j=1}^M \lambda^j(k) \right)$

slide-157
SLIDE 157
  • Since $q_t < \bar q\,\mathbf 1$, consider as state space

$$E = \left\{ (q, t) \mid q \in \mathbb R^K, \ 0 < q < \bar q\,\mathbf 1, \ t \in [0, T] \right\}$$

  • For $J : E \to \mathbb R^+$ define the operator $\Psi$ mapping $J$ to $\Psi J : E \to \mathbb R^+$ by

$$(\Psi J)(q, t) = \int_0^{T-t} e^{-Ms} \max_\pi \left[ \sum_{j=1}^M J\left( \operatorname{diag}(\Gamma^j(\pi, k))\,e^{\Lambda s} q,\ t + s \right) \right] ds + e^{-M(T-t)} \sum_{k=1}^K \left( e^{\Lambda(T-t)} q \right)_k$$

→ The last term on the right is motivated by the objective function.

slide-158
SLIDE 158

→ The operator $\Psi$ is such that $\Psi : C(E) \to C(E)$ and it is a contraction operator with contraction constant $1 - e^{-MT}$.

slide-159
SLIDE 159
  • Given $n \in \mathbb N$, let $J_0 = 0$ and, for $j \le n$, $J_j = \Psi J_{j-1}$, and let $(\pi^n_t)_{t \in [0,T]}$ be the strategy induced by computing $J_n(q_0, 0)$.

  • Define

$$J^{*,n}(q, t) = \max_\pi \hat E\left\{ \sum_{k=1}^K q_T(k),\ \tau_n > T \,\Big|\, q_t = q \right\}.$$

→ It is the optimal value for the problem obtained from the original one by replacing $\Omega$ with $\Omega \cap \{\tau_n > T\}$.
→ One can show that $J_n(q, t) = J^{*,n}(q, t)$, $\forall (q, t) \in E$.

slide-160
SLIDE 160
  • For

$$J^*(q, t) = \max_\pi \hat E\left\{ \sum_{k=1}^K q_T(k) \,\Big|\, q_t = q \right\}$$

one has

$$J_n(q, t) \le J^*(q, t) \le J_n(q, t) + K\bar q\,\hat P\{\tau_n \le T\}$$

with $\bar q$ the bound on $q_t(k)$.

slide-161
SLIDE 161

Main theorem

i) $J^* = J^*(q, 0) = \max_\pi \hat E\left\{ \sum_{k=1}^K q_T(k) \mid q_0 = q \right\}$ is the unique fixed point of the operator $\Psi$, i.e. $J^* = \Psi J^*$, and

$$||J_n - J^*|| \le e^{MT}\left( 1 - e^{-MT} \right)^n ||J_1||$$

ii) The optimal $\pi^*$ is

$$\pi^*_t = \operatorname{argmax}_\pi \left[ \sum_{j=1}^M J^*\left( \operatorname{diag}(\Gamma^j(\pi, k))\,e^{\Lambda(t - \tau_n)} q,\ t \right) \right]$$

with $\pi$ admissible and $\Lambda = Q' + MI - \operatorname{diag}\left( \sum_{j=1}^M \lambda^j(k) \right)$

slide-162
SLIDE 162

iii) Let $\tilde J_n = \tilde J_n(q, 0)$ be the value function of the original problem corresponding to the strategy $(\pi^n_t)$. Then

$$J_n \le \tilde J_n \le J_n + K\bar q\,\hat P\{\tau_n \le T\}$$

→ Since $\lim_{n\to\infty} \hat P\{\tau_n \le T\} = 0$, it follows from the Theorem that $\lim_{n\to\infty} \tilde J_n = J^*$, i.e. the strategy $(\pi^n_t)$, induced by computing the $n$-th iterate $J_n(q, 0)$ of $\Psi$, is, for $n$ sufficiently large, nearly optimal in the original problem.

→ To compute $J_n$ and the corresponding strategy $(\pi^n_t)$, one thus follows an approach of the type of value iteration for infinite-horizon MDPs.
→ Quantization (with convergence) is used to actually compute $(\pi^n_t)$.

slide-163
SLIDE 163
  • Nagai H. and Peng S. (2002), “Risk-sensitive dynamic portfolio optimization with partial information on infinite time horizon”, Annals of Applied Probability, 12, pp. 173–195.

  • Kirch M. and Runggaldier W.J. (2004), “Efficient hedging when asset prices follow a geometric Poisson process with unknown intensities”, SIAM J. Control and Optimization, 43, pp. 1174–1195.

  • Callegaro G., Di Masi G.B. and Runggaldier W.J. (2006), “Portfolio optimization in discontinuous markets under incomplete information”, Asia Pacific Financial Markets, 13/4, pp. 373–394.