Optimization problems in finance under full and partial information


Optimization problems in finance under full and partial information. Wolfgang Runggaldier, University of Padova, Italy, www.math.unipd.it/runggaldier. Tutorial for the Special Semester on Stochastics with Emphasis on Finance, Linz, September 2008.


  1. Standard classical optimization problem (maximization of expected utility from consumption and terminal wealth). Neglecting transaction costs but including the additional control variable c_t that represents the rate of consumption at time t:
$$ dV_t^{\pi,c} = V_t^{\pi,c}\left[ r_t\,dt + \pi_t'(A_t - r_t\mathbf{1})\,dt + \pi_t'\Sigma_t\,dw_t \right] - c_t\,dt $$
$$ \max_{\pi,c}\; E_{V_0}\left\{ \int_0^T U_1(t, c_t)\,dt + U_2(V_T^{\pi,c}) \right\} $$
with U_1(·) and U_2(·) utility functions from consumption and terminal wealth respectively that satisfy the usual assumptions.

  2. Insurance context • The fundamental quantity, corresponding to V_t, is here the risk process that, without investment or reinsurance, is given by
$$ R_t = s + ct - \sum_{i=1}^{N_t} X_i := s + ct - D_t $$
with X_i the claim sizes and c the premium intensity. → Additional features: ⋆ Investment ⋆ Reinsurance ⋆ Other

  3. Investment • One can invest in one or more risky assets. Assume one such asset with price dynamics $dZ_t = aZ_t\,dt + bZ_t\,dw_t$ and let A_t be the amount invested in this asset (the rest in the bank account with interest rate r). • With $\theta_t = A_t / Z_t$ denoting the number of shares held in the risky asset, one obtains for the risk process
$$ dR_t^\theta = c\,dt - dD_t + \theta_t\,dZ_t + r\,(R_t^\theta - \theta_t Z_t)\,dt $$

  4. Reinsurance • There are various forms of reinsurance; here we mention excess-of-loss reinsurance: the insurer pays min(X, b) and the reinsurer pays (X − b)^+. For this the insurer pays the reinsurance premium h(b) to the reinsurer. • One then obtains for the risk process
$$ R_t^b = s + ct - \int_0^t h(b_s)\,ds - \sum_{i=1}^{N_t} \min\{ b_{T_i}, X_i \} $$

  5. Objective • A typical objective is the minimization of the ruin probability $P\{ R_t < 0 \text{ for some } t \mid R_0 = s \}$. • One may consider other objective functions, where π_t denotes a generic control at time t (it may be θ_t or b_t), i.e.
$$ \max_\pi\; E\left\{ \int_0^\tau U(R_t^\pi, \pi_t)\,dt + U(R_\tau^\pi, \pi_\tau) \,\Big|\, R_0 = s \right\} $$
with $\tau := \inf\{ t \ge 0 \mid R_t^\pi < 0 \}$. → This is of the previous standard form (with a random horizon).
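For the basic risk process of slide 2 (no investment or reinsurance) with exponentially distributed claims, the ruin probability above has a classical closed form, the Cramér-Lundberg formula. The slides do not derive it; the sketch below simply implements that standard result, with illustrative parameter values.

```python
import math

def ruin_probability_exponential(s, lam, mu, c):
    """Classical Cramer-Lundberg ruin probability P{R_t < 0 for some t | R_0 = s}
    for R_t = s + c*t - sum of claims, when claim sizes are Exponential(mean mu),
    claims arrive with Poisson intensity lam, and c > lam*mu (net profit condition).
    """
    if c <= lam * mu:
        return 1.0  # without a positive safety loading, ruin is certain
    rho = c / (lam * mu) - 1.0       # relative safety loading
    R = rho / (mu * (1.0 + rho))     # adjustment (Lundberg) coefficient
    return (1.0 / (1.0 + rho)) * math.exp(-R * s)

# With a 20% loading the ruin probability starts at 1/1.2 and decays in s:
print(ruin_probability_exponential(0.0, lam=1.0, mu=1.0, c=1.2))
print(ruin_probability_exponential(5.0, lam=1.0, mu=1.0, c=1.2))
```

Note how the formula makes the role of the initial capital s explicit: ruin probability decays exponentially in s at the Lundberg rate R.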

  6. Hedging problem • Given an (underlying) price process S_t and a future maturity T, let $H_T \in \mathcal{F}_T^S$ (contingent claim). → It represents a liability depending on the future evolution of the underlying S. This implies some risk, and the purpose is to hedge this risk by investing in a self-financing portfolio.

  7. • Let
$$ V_t^\phi = V_0 + \int_0^t \phi_s^0\,dB_s + \sum_{i=1}^N \int_0^t \phi_s^i\,dS_s^i $$
be the value at t of a self-financing portfolio. → Determine, if possible, $V_0$ and $\bar\phi$ s.t. $V_T^{\bar\phi} = H_T$ a.s.; i.e. such that one has perfect duplication/replication of the claim. → If this is possible for any $H_T$, then the market is said to be complete.

  8. → If the market is not complete, or the initially available capital is not sufficient for perfect replication, one has to choose a hedging criterion. Two possible criteria are: • Minimization of shortfall risk (an asymmetric, downside-type criterion)
$$ \min_\pi\; E_{S_0,V_0}\left\{ L\big( (H_T - V_T^\pi)^+ \big) \right\} $$
• Minimization of quadratic loss (a symmetric risk criterion)
$$ \min_\pi\; E_{S_0,V_0}\left\{ (H_T - V_T^\pi)^2 \right\} $$

  9. General problem formulation (over a finite horizon and including a benchmark)
$$ dS_t = (\mathrm{diag}\,S_t)\,A_t\,dt + (\mathrm{diag}\,S_t)\,\Sigma_t\,dw_t $$
$$ dI_t = I_t\left[ \alpha_t\,dt + \sigma_t\,dw_t + \rho_t\,dv_t \right] $$
$$ dV_t^{\pi,c} = V_t^{\pi,c}\left[ r_t\,dt + \pi_t'(A_t - r_t\mathbf{1})\,dt + \pi_t'\Sigma_t\,dw_t \right] - c_t\,dt $$
$$ \min_{\pi,c}\; E_{S_0,I_0,V_0}\left\{ \int_0^T L_1(I_t, V_t^{\pi,c}, \pi_t, c_t)\,dt + L_2(S_T, I_T, V_T^{\pi,c}) \right\} $$

  10. • L_1(·) and L_2(·) are loss functions that may be of the following form
$$ L_1(I_t, V_t^{\pi,c}, \pi_t, c_t) = \big( g(I_t) - \eta - V_t^{\pi,c} \big)^+ - c_t $$
$$ L_2(S_T, I_T, V_T^{\pi,c}) = \big( H(I_T, S_T) - V_T^{\pi,c} \big)^+ $$
for some functions g(·) and H(·) and for an η > 0. → This includes hedging of a contingent claim H(S_T).

  11. One may consider different variants of the basic setup, corresponding to possible variants of a general stochastic control problem, e.g.: • The basic dynamics of the price vector S_t may be generalized as described previously. • If some of the assets are subject to default, the fixed horizon may be replaced by a stopping time τ. This stopping time may also become a control variable, as in the hedging problem for American-type options. • The horizon may become infinite for problems of life-time consumption or when the objective is to maximize the growth rate. Maximizing the risk-sensitized growth rate leads to a risk-sensitive control problem.

  12. • In the presence of transaction costs a convenient way to define a trading strategy is by the total number of shares of the various assets purchased, respectively sold, up to the current time t, i.e. $L_t^i$ and $M_t^i$ respectively. Letting $\lambda^i$ and $\mu^i$ denote the cost rates for buying, respectively selling, asset i, the self-financing condition then leads to
$$ dV_t = \phi_t^0\,dB_t + \sum_{i=1}^N \left[ \phi_t^i\,dS_t^i - S_t^i\big( \lambda^i\,dL_t^i + \mu^i\,dM_t^i \big) \right] $$
with $\phi_t^i = L_t^i - M_t^i$. In this way one obtains a singular stochastic control problem. • The inclusion also of fixed transaction costs may lead to impulsive control problems. Such problems may also arise when a central bank intervenes to control the exchange rate.

  13. Optimization, arbitrage, and martingale measures • Arbitrage opportunity (AO): existence of a self-financing portfolio φ s.t.
$$ V_0^\phi = 0, \qquad V_N^\phi \ge 0, \qquad P\{ V_N^\phi > 0 \} > 0 $$
• Consider for simplicity maximization of terminal utility
$$ \max_{\{\phi\ \text{self fin}\}}\; E_{V_0 = v}\left\{ U(V_N^\phi) \right\} $$
→ If there exists an optimal solution φ* of this problem, then there cannot be an (AO).

  14. Proof: • Given φ*, let φ be an arbitrage portfolio and put $\bar\phi = \phi^* + \phi$ ⇒ $V_N^{\bar\phi} = V_N^{\phi^*} + V_N^\phi$ (with $V_0^{\bar\phi} = v$). • From the assumption on φ, $V_N^\phi \ge 0$ and $P\{ V_N^\phi > 0 \} > 0$ ⇒ $V_N^{\bar\phi} \ge V_N^{\phi^*}$ with $P\{ V_N^{\bar\phi} > V_N^{\phi^*} \} > 0$. • Since U(·) is monotonically increasing, ⇒ $E\{ U(V_N^{\bar\phi}) \} > E\{ U(V_N^{\phi^*}) \}$, contradicting the assumed optimality of φ*.

  15. • According to the 1st FTAP, AOA ⇔ ∃ MM Q, i.e. there exists a numeraire N_n (reference asset/portfolio) s.t.
$$ E^Q\left\{ \frac{S_n}{N_n} \,\Big|\, \mathcal{F}_m \right\} = \frac{S_m}{N_m}, \qquad m < n $$
→ For an at most denumerable Ω, if φ* is the solution of $\max_\phi E\{ U(V_N^\phi) \}$ then, for the numeraire $N_n = B_n$,
$$ Q(\omega) = P(\omega)\, \frac{B_N\, U'(V_N^{\phi^*})}{E\left\{ B_N\, U'(V_N^{\phi^*}) \right\}} $$

  16. Changing the numeraire → change of MM. Question: is there a numeraire s.t. for the corresponding Q it holds that Q = P? → A portfolio that, if used as numeraire, has the above property is called a numeraire portfolio.

  17. • Log-optimal portfolio: φ* s.t.
$$ E_{V_0 = v}\{ \log V_N^{\phi^*} \} = \max_\phi\; E_{V_0 = v}\{ \log V_N^\phi \} $$
→ a log-optimal portfolio is also growth-optimal in the sense that it maximizes the “growth rate”. Theorem: A log- (growth-) optimal portfolio is a numeraire portfolio.

  18. Proof: (for Ω denumerable) • It will be shown later that the log-optimal portfolio value is
$$ V_n^* = V_n^{\phi^*} = v\,B_n\,L_n^{-1}; \qquad L_n = E\left\{ \frac{dQ}{dP} \,\Big|\, \mathcal{F}_n \right\}, \quad Q \text{ the MM for the numeraire } B_n $$
• Let Q* be the MM for the numeraire V*; then ($B_0 = 1$, $V_0^* = v$)
$$ \frac{dQ^*}{dQ} = \frac{V_N^*\,B_0}{V_0^*\,B_N} = \frac{V_N^*}{v\,B_N} $$
→
$$ Q^* = Q\,\frac{V_N^*}{v\,B_N} = P\,\frac{B_N\,U'(V_N^*)}{E\{ B_N\,U'(V_N^*) \}}\,\frac{V_N^*}{v\,B_N} = P\,\frac{B_N\,(V_N^*)^{-1}}{E\{ B_N\, v^{-1} B_N^{-1} L_N \}}\,\frac{V_N^*}{v\,B_N} = P $$

  19. Solution methodologies • As for general stochastic control problems, also for those arising in finance a natural solution approach is based on Dynamic Programming (DP). • An alternative method, developed mainly in connection with financial applications, is the so-called martingale method (MM).

  20. Dynamic Programming (discrete time) • Recalling $V_{n+1} = G_n(V_n, \pi_n, c_n, \xi_{n+1})$ with $\xi_n$ i.i.d. → if $\pi_n = \pi(V_n)$, $c_n = c(V_n)$ are Markov controls, then $V_n$ is Markov. • Objective:
$$ \max_{(\pi_0,c_0),\cdots,(\pi_N,c_N)}\; E_{V_0}\left\{ \sum_{n=0}^N U(V_n, \pi_n, c_n) \right\} $$

  21. Dynamic Programming Principle (DP) • If a process is optimal over an entire sequence of periods, then it has to be optimal over each single period. • This allows one to obtain an optimal control sequence by a sequence of optimizations over the individual controls.

  22. Application of the DP principle. Using adaptedness of (π_n, c_n) and Markovianity of V_n (for illustration the case N = 2, with only π_n as control):
$$ \max_{\pi_0,\pi_1,\pi_2} E\{ U(V_0,\pi_0) + U(V_1,\pi_1) + U(V_2,\pi_2) \} \overset{\text{(DP)}}{=} $$
$$ = \max_{\pi_0,\pi_1} E\{ U(V_0,\pi_0) + U(V_1,\pi_1) + \max_{\pi_2} U(V_2,\pi_2) \} \overset{\text{(Markov)}}{=} $$
$$ = \max_{\pi_0,\pi_1} E\big\{ U(V_0,\pi_0) + \big[ U(V_1,\pi_1) + E\{ \max_{\pi_2} U(V_2,\pi_2) \mid (V_1,\pi_1) \} \big] \big\} \overset{\text{(DP+M)}}{=} $$
$$ = \max_{\pi_0} E\big\{ U(V_0,\pi_0) + E\big\{ \max_{\pi_1}\big[ U(V_1,\pi_1) + E\{ \max_{\pi_2} U(V_2,\pi_2) \mid (V_1,\pi_1) \} \big] \,\big|\, (V_0,\pi_0) \big\} \big\} $$

  23. Implementation of the DP principle • Let (optimal cost-to-go)
$$ \Phi_n(v) := \max_{\pi_n,\cdots,\pi_N}\; E\left\{ \sum_{m=n}^N U(V_m, \pi_m) \,\Big|\, V_n = v \right\} $$
→ the DP principle then leads to the backward recursions (DP algorithm)
$$ \Phi_N(v) = \max_{\pi_N} U(v, \pi_N) $$
$$ \Phi_n(v) = \max_{\pi_n}\left[ U(v, \pi_n) + E\{ \Phi_{n+1}\big( G(v, \pi_n, \xi_{n+1}) \big) \mid V_n = v \} \right] $$
→ this leads to a sequence of individual maximizations; one obtains automatically Markov controls (π_n as a function only of V_n = v).

  24. • The DP algorithm can be used for numerical calculations if ξ_n is finite-valued. • It can however also be used to obtain some explicit expressions, as will be illustrated for the following example (scalar S and, for simplicity, only terminal utility and c_n = r_n = 0):
$$ G_n(V_n, \pi_n, \xi_{n+1}) = V_n\left[ 1 + \pi_n(\xi_{n+1} - 1) \right] $$
$$ \max_{\pi_0,\cdots,\pi_N}\; E\{ U(V_N) \} $$

  25. → If U(v) = log v (log-utility) and ξ_n is binomial (ξ_n ∈ {u, d}, d < 1 < u), then Φ_n(v) = log v + k_n with
$$ k_n = (N - n)\left[ p \log\frac{p}{q} + (1 - p)\log\frac{1 - p}{1 - q} \right], \qquad q = \frac{1 - d}{u - d} $$
and
$$ \pi_n^* = \frac{p - q}{(u - d)\,q\,(1 - q)} \qquad \text{(investing a constant fraction)} $$
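These closed forms are easy to sanity-check numerically. The sketch below (with illustrative parameter values, not from the slides) grid-searches the one-period maximization of E{log(1 + π(ξ − 1))} and compares the maximizer with the formula for π*, and the attained value with the per-period increment of k_n.

```python
import math

# Illustrative binomial parameters (assumed, not from the slides)
u, d, p = 1.2, 0.9, 0.6
q = (1 - d) / (u - d)  # here q = 1/3

def expected_log_growth(pi):
    """E{log(1 + pi*(xi - 1))} for one period, xi in {u, d} with P{xi = u} = p."""
    return p * math.log(1 + pi * (u - 1)) + (1 - p) * math.log(1 + pi * (d - 1))

# Closed-form optimizer and per-period value increment from the slide
pi_star = (p - q) / ((u - d) * q * (1 - q))
k_step = p * math.log(p / q) + (1 - p) * math.log((1 - p) / (1 - q))

# Brute-force check on a grid inside the admissible range (-1/(u-1), 1/(1-d))
grid = [-4.9 + i * 1e-3 for i in range(14800)]
best = max(grid, key=expected_log_growth)
print(pi_star, best)  # both close to 4.0 for these parameters
```

Note that at the optimum 1 + π*(u − 1) = p/q and 1 + π*(d − 1) = (1 − p)/(1 − q), which is exactly why the attained one-period value equals p log(p/q) + (1 − p) log((1 − p)/(1 − q)).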

  26. Proof (by induction) • True for n = N. • Assume it is true for n + 1; then
$$ \Phi_n(v) = \max_\pi\; E\left\{ \log v + \log\big(1 + \pi(\xi_{n+1} - 1)\big) + k_{n+1} \right\} $$
where
$$ E\{ \log(1 + \pi(\xi_{n+1} - 1)) \} = p \log(1 + \pi(u - 1)) + (1 - p)\log(1 + \pi(d - 1)) $$
Imposing
$$ \frac{\partial}{\partial \pi} E\{ \log(1 + \pi(\xi_{n+1} - 1)) \} = \frac{p(u - 1)}{1 + \pi(u - 1)} + \frac{(1 - p)(d - 1)}{1 + \pi(d - 1)} = 0 $$
leads to the required π*; substituting the latter into the previous expression then allows one to conclude.

  27. • A constant investment fraction results only for log-utility. • Taking U(v) = 1 − e^{−v} one has, in the binomial case,
$$ \Phi_n(v) = 1 - k_n e^{-v} $$
$$ \pi_n^* = \frac{1}{v(u - d)}\, \log\frac{p(1 - q)}{q(1 - p)} $$
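The exponential-utility optimizer can be checked the same way as the log case: the sketch below (illustrative parameters, not from the slides) grid-searches the one-period maximization of E{1 − e^{−v(1 + π(ξ − 1))}} and compares the maximizer with the formula for π*.

```python
import math

# Illustrative parameters (assumed, not from the slides)
u, d, p, v = 1.2, 0.9, 0.6, 1.0
q = (1 - d) / (u - d)

# Claimed optimizer for U(v) = 1 - exp(-v)
pi_star = math.log(p * (1 - q) / (q * (1 - p))) / (v * (u - d))

def expected_utility(pi):
    """E{1 - exp(-v(1 + pi*(xi - 1)))} over one period."""
    return (p * (1 - math.exp(-v * (1 + pi * (u - 1))))
            + (1 - p) * (1 - math.exp(-v * (1 + pi * (d - 1)))))

# Grid search around the candidate; the expected utility should peak at pi_star
grid = [pi_star + (i - 2000) * 1e-4 for i in range(4001)]
best = max(grid, key=expected_utility)
print(pi_star, best)
```

The factor 1/v in π* means the invested amount vπ* is constant in wealth, in contrast with the log case, where the invested fraction is constant.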

  28. Dynamic Programming (continuous time). Heuristic derivation of the HJB equation (from discrete to continuous time)
$$ B_t \equiv 1; \qquad dS_t = S_t\left[ a\,dt + \sigma\,dw_t \right]; \qquad dV_t = \phi_t\,dS_t - c_t\,dt $$
→ with $Z_t := \log S_t$ and $\pi_t := \phi_t S_t / V_t$,
$$ dZ_t = \left( a - \frac{\sigma^2}{2} \right) dt + \sigma\,dw_t $$
$$ dV_t = \left( V_t a \pi_t - c_t \right) dt + V_t \sigma \pi_t\,dw_t $$

  29. → Putting $Y_t := [Z_t, V_t]'$ and $\Pi_t := [\pi_t, c_t]$ one obtains a control problem of the following general form
$$ dY_t = A_t(Y_t, \Pi_t)\,dt + B_t(Y_t, \Pi_t)\,dw_t; \qquad t \in [0, T] $$
$$ \sup_\Pi\; E_{Y_0}^\Pi\left\{ \int_0^T U_1(t, Y_t, \Pi_t)\,dt + U_2(Y_T, \Pi_T) \right\} $$
where $\Pi_t \in \mathcal{F}_t^Y$.

  30. • Apply an Euler-type discretization (step Δ)
$$ Y_{t+\Delta} = Y_t + A_t(Y_t, \Pi_t)\,\Delta + B_t(Y_t, \Pi_t)\,\Delta w_t =: G_t(Y_t, \Pi_t, \Delta w_t) $$
$$ \sup_\Pi\; E_{Y_0}^\Pi\left\{ \sum_{t=0}^{T/\Delta} \Delta\, U_t(Y_t, \Pi_t) \right\} $$
and recall that
$$ \Phi_t(y) = \sup_\Pi \left[ \Delta\, U_t(y, \Pi) + E\{ \Phi_{t+\Delta}\big( G_t(y, \Pi, \Delta w_t) \big) \mid Y_t = y \} \right] $$

  31. Via a Taylor expansion, and taking into account that $E\{(\Delta w_t)^2\} \approx \Delta$,
$$ \sup_\Pi \left[ \Delta U_t(y, \Pi) + E\{ \Phi_{t+\Delta} - \Phi_t(y) \mid Y_t = y \} \right] $$
$$ = \frac{\partial}{\partial t}\Phi_t(y)\,\Delta + \sup_\Pi \left[ \Delta U_t(y, \Pi) + \Delta\, A_t(y, \Pi)\,\frac{\partial}{\partial y}\Phi_t(y) + \frac{\Delta}{2}\, B_t^2(y, \Pi)\,\frac{\partial^2}{\partial y^2}\Phi_t(y) \right] + o(\Delta) = 0 $$

  32. → Dividing by Δ and letting Δ ↓ 0,
$$ \frac{\partial}{\partial t}\Phi_t(y) + \sup_\Pi \left[ A_t(y, \Pi)\,\frac{\partial}{\partial y}\Phi_t(y) + \frac{1}{2} B_t^2(y, \Pi)\,\frac{\partial^2}{\partial y^2}\Phi_t(y) + U_t(y, \Pi) \right] = 0 $$
$$ \Phi_T(y) = \sup_\Pi U_T(y, \Pi) $$

  33. Standard heuristic derivation (based on the DP principle)
$$ \Phi_t(y) = \sup_{\Pi_s,\, s \in [t, t+\Delta]}\; E^\Pi\left\{ \int_t^{t+\Delta} U_s(Y_s^\Pi, \Pi_s)\,ds + \Phi_{t+\Delta}(Y_{t+\Delta}^\Pi) \,\Big|\, Y_t = y \right\} $$
and then proceed analogously as before.

  34. Solution procedure i) Solve the maximization over Π in terms of the yet unknown Φ_t(y). ii) Insert the maximizing value Π*(t, y) and solve the resulting PDE. → A “verification theorem” guarantees, under sufficient regularity (classical solution), the optimality of the resulting Π*(t, y) and Φ_t(y). → In the absence of sufficient regularity: viscosity solutions. → Explicit analytical solutions exist only in particular cases (e.g. linear-quadratic Gaussian).

  35. Standard classical optimization problem (maximization of expected utility from consumption and terminal wealth). Neglecting transaction costs but including the additional control variable c_t that represents the rate of consumption at time t:
$$ dV_t^{\pi,c} = V_t^{\pi,c}\left[ r_t\,dt + \pi_t'(A_t - r_t\mathbf{1})\,dt + \pi_t'\Sigma_t\,dw_t \right] - c_t\,dt $$
$$ \max_{\pi,c}\; E_{V_0}\left\{ \int_0^T U_1(t, c_t)\,dt + U_2(V_T^{\pi,c}) \right\} $$
with U_1(·) and U_2(·) utility functions from consumption and terminal wealth respectively that satisfy the usual assumptions.

  36. • Put, for t ∈ [0, T],
$$ J^{\pi,c}(t, v) := E^{\pi,c}\left\{ \int_t^T U_1(s, c_s)\,ds + U_2(V_T^{\pi,c}) \,\Big|\, V_t = v \right\} $$
and let (value function)
$$ \Phi(t, v) := \sup_{\pi,c} J^{\pi,c}(t, v) $$

  37. HJB equation
$$ \frac{\partial \Phi}{\partial t}(t, v) + \sup_{\pi,c}\left[ \big( v r_t - c + v \pi'(A_t - r_t\mathbf{1}) \big)\frac{\partial \Phi}{\partial v}(t, v) + \frac{1}{2} v^2 \|\pi'\Sigma_t\|^2 \frac{\partial^2 \Phi}{\partial v^2}(t, v) + U_1(t, c) \right] = 0 $$
$$ \Phi(T, v) = U_2(v), \qquad \Phi(t, 0) = 0 $$
→
$$ c_t^* = I_1\!\left( \frac{\partial \Phi}{\partial v}(t, v) \right) \quad (I_1(\cdot)\text{ the inverse of } U_1'(\cdot)); \qquad \pi_t^* = -\left[ \Sigma_t \Sigma_t' \right]^{-1}(A_t - r_t\mathbf{1})\, \frac{\partial \Phi / \partial v\,(t, v)}{v\, \partial^2 \Phi / \partial v^2\,(t, v)} $$
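A quick scalar plausibility check of the feedback formula for π*: for log-utility the value function has the form Φ(t, v) = log v + f(t) (the f(t) part drops out of the v-derivatives), so the formula should return the constant Merton fraction (A − r)/Σ². The sketch below evaluates the derivatives by finite differences; the parameter values are illustrative assumptions.

```python
import math

# Illustrative scalar parameters (assumed, not from the slides)
A, r, Sigma = 0.08, 0.02, 0.2
v, h = 2.0, 1e-4

def Phi(v):
    # v-dependent part of the log-utility value function Phi(t,v) = log v + f(t)
    return math.log(v)

# Central finite differences for Phi_v and Phi_vv
Phi_v = (Phi(v + h) - Phi(v - h)) / (2 * h)
Phi_vv = (Phi(v + h) - 2 * Phi(v) + Phi(v - h)) / (h * h)

# Feedback formula pi* = -(Sigma^2)^{-1}(A - r) * Phi_v / (v * Phi_vv)
pi_star = -(A - r) / (Sigma ** 2) * Phi_v / (v * Phi_vv)
merton = (A - r) / (Sigma ** 2)
print(pi_star, merton)  # both close to 1.5
```

With Φ_v = 1/v and Φ_vv = −1/v², the ratio Φ_v/(vΦ_vv) equals −1, independently of v, which is why the log-investor's fraction is myopic and constant.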

  38. • After substituting (π*, c*) for (π, c) one is left with a PDE: explicit solutions can be obtained only in specific cases (mainly in insurance applications); regularity results are also required. • Qualitative results are possible, e.g. the “mutual fund theorem”: the optimal portfolio consists of an allocation between two fixed mutual funds. • The invertibility of Σ_tΣ_t' is equivalent to completeness of the market (recall that, if an optimal solution exists, there cannot be arbitrage, but the market may be incomplete).

  39. Approximations • If analytical solutions are not possible: approximations (here an outline of a methodology based on work by H. Kushner). • Use the HJB equation only as an indication for finding an appropriate time and space discretization (V_t^δ) of (V_t) such that
$$ (V_t^\delta) \Longrightarrow (V_t) \quad \text{in distribution as } \delta \to 0 $$
with (V_t^δ) a continuous-time interpolation of a discrete-time and finite-valued process.

  40. • Letting $J_\delta^{\pi,c}(t, v)$ be the corresponding expected remaining cumulative utility at time t, assume furthermore that
$$ \left| J_\delta^{\pi,c}(0, v) - J^{\pi,c}(0, v) \right| \le G_\delta $$
with G_δ not depending on (π, c) and such that $\lim_{\delta \to 0} G_\delta = 0$.

  41. Then i)
$$ \left| \sup_{\pi,c} J_\delta^{\pi,c}(0, v) - \sup_{\pi,c} J^{\pi,c}(0, v) \right| \le G_\delta $$
ii) Let (π^δ, c^δ) be the optimal strategy of the approximating problem and let it denote also its interpolation, in order to apply it to the original problem. Then
$$ \left| \sup_{\pi,c} J^{\pi,c}(0, v) - J^{\pi^\delta, c^\delta}(0, v) \right| \le 2\, G_\delta $$

  42. General underlying approximation philosophy. Approximate the original problem by a sequence of problems such that the last one is explicitly solvable, and show that the corresponding solution (suitably extended to be applicable in the original problem) is nearly optimal in the original problem. → The control computed from the approximating problem may even be simpler to apply in practice.

  43. Martingale method (discrete time). Preliminaries • Q is a martingale measure if, for a given numeraire N_n,
$$ E^Q\left\{ \frac{S_n}{N_n} \,\Big|\, \mathcal{F}_m \right\} = \frac{S_m}{N_m}, \qquad m < n $$
• With $\tilde S_n := N_n^{-1} S_n$,
$$ E^Q\{ \tilde S_n \mid \mathcal{F}_m \} = \tilde S_m \;\Leftrightarrow\; E^Q\{ \tilde S_n - \tilde S_m \mid \mathcal{F}_m \} = 0 $$
→ Usually N_n = B_n (the locally riskless asset).
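In the one-period binomial model the martingale-measure condition pins Q down explicitly: with numeraire B_n = (1 + r)^n, the up-probability q = (1 + r − d)/(u − d) makes the discounted price a Q-martingale. A minimal numeric sketch (illustrative parameters):

```python
# One-period binomial check of E^Q{S_1/B_1 | F_0} = S_0/B_0 with B_n = (1+r)^n.
# Parameters are illustrative assumptions, not from the slides.
u, d, r, S0 = 1.2, 0.9, 0.05, 100.0

q = (1 + r - d) / (u - d)  # risk-neutral up-probability (here 0.5)
disc_expectation = (q * u * S0 + (1 - q) * d * S0) / (1 + r)
print(q, disc_expectation)  # disc_expectation equals S0
```

Note that q ∈ (0, 1) exactly when d < 1 + r < u, i.e. exactly when the model is arbitrage-free, in line with the 1st FTAP of slide 15.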

  44. • The self-financing condition is
$$ \phi_n^0 B_n + \sum_{i=1}^K \phi_n^i S_n^i = \phi_{n+1}^0 B_n + \sum_{i=1}^K \phi_{n+1}^i S_n^i + c_n $$
which, with N_n = B_n, becomes
$$ \phi_n^0 + \sum_{i=1}^K \phi_n^i \tilde S_n^i = \phi_{n+1}^0 + \sum_{i=1}^K \phi_{n+1}^i \tilde S_n^i + \tilde c_n $$
→
$$ \tilde V_{n+1} = \phi_{n+1}^0 + \sum_{i=1}^K \phi_{n+1}^i \tilde S_{n+1}^i = \tilde V_n + \sum_{i=1}^K \phi_{n+1}^i \Delta \tilde S_n^i - \tilde c_n $$
⇒ (for $\tilde c_n = 0$) $E^Q\{ \tilde V_{n+1} \mid \mathcal{F}_n \} = \tilde V_n$, i.e. the discounted values of a self-financing portfolio are (Q, F_n)-martingales.

  45. • Recall the hedging problem: Given $H_N \in \mathcal{F}_N^S$, determine $V_0 = v$ and a self-financing strategy φ (no consumption) s.t. $V_N^\phi = H_N$ a.s. (equivalently $\tilde V_N^\phi = \tilde H_N$ a.s.). • Since $\tilde V_n^\phi$ is a Q-martingale for any martingale measure Q,
$$ V_0 = \tilde V_0 = E^Q\{ \tilde V_N^\phi \} = E^Q\{ \tilde H_N \} $$
and this determines the initial wealth V_0 = v.

  46. Determining the hedging strategy corresponds to a martingale representation problem. i) Define $\tilde M_n := E^Q\{ \tilde H_N \mid \mathcal{F}_n \}$, which is a (Q, F_n)-martingale
( $E^Q\{ \tilde M_n \mid \mathcal{F}_m \} = E^Q\{ E^Q\{ \tilde H_N \mid \mathcal{F}_n \} \mid \mathcal{F}_m \} = E^Q\{ \tilde H_N \mid \mathcal{F}_m \} = \tilde M_m$ ).
ii) Determine $\bar\phi_n$ s.t., with V_0 = v and
$$ \tilde V_n^{\bar\phi} = V_0 + \sum_{m=0}^{n-1} \sum_{i=1}^K \bar\phi_{m+1}^i\, \Delta \tilde S_m^i $$
one has $\tilde M_n = \tilde V_n$ (representing the martingale $\tilde M_n$ in the form of $\tilde V_n$). → $\bar\phi_n$ is then the hedging strategy.
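In a one-period binomial model both steps can be carried out by hand. The sketch below (illustrative parameters; B ≡ 1) prices a call H_1 = (S_1 − K)^+ as V_0 = E^Q{H_1} and finds the representation coefficient φ (the share holding) that makes the portfolio value match H_1 in both states.

```python
# One-period binomial replication sketch; parameters are illustrative.
u, d, S0, K = 1.2, 0.9, 100.0, 100.0
q = (1 - d) / (u - d)  # MM for B = 1 (r = 0)

H_up, H_dn = max(u * S0 - K, 0.0), max(d * S0 - K, 0.0)
V0 = q * H_up + (1 - q) * H_dn          # initial wealth = E^Q{H_1}
phi = (H_up - H_dn) / (S0 * (u - d))    # martingale-representation coefficient

# Replication check in both states (the residual V0 - phi*S0 sits in the bank)
V_up = V0 + phi * (u * S0 - S0)
V_dn = V0 + phi * (d * S0 - S0)
print(V0, phi, V_up, V_dn)  # V_up = H_up and V_dn = H_dn
```

The two-equation, two-unknown structure (V_0 and φ against the two terminal states) is the discrete germ of the continuous-time martingale representation used later in slides 61-63.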

  47. Martingale method (discrete time). Methodology (only terminal utility; no consumption) 1. Given V_0 = v, determine the set of reachable terminal wealths V_N, i.e.
$$ \mathcal{V}_v := \left\{ V \mid V = V_N^\phi \text{ for } \phi \text{ self-financing and } V_0 = v \right\} $$
2. Determine the optimal terminal wealth V_N^*:
$$ E\{ U(V_N^*) \} \ge E\{ U(V_N) \} \qquad \forall\, V_N \in \mathcal{V}_v $$
3. Determine a self-financing strategy φ* s.t. $V_N^{\phi^*} = V_N^*$ (corresponds to hedging the “claim” $H_N = V_N^*$).

  48. • Solving i): $\mathcal{V}_v$ is the set of all V_N s.t. $E^Q\{ \tilde V_N \} = v$ for every MM Q. → If the set of all MM's is a convex polyhedron with a finite number of “vertices” $Q^j$ (j = 1, ..., J), then the condition becomes
$$ E^{Q^j}\{ \tilde V_N \} = v; \qquad j = 1, \cdots, J $$

  49. • Solving ii), i.e.
$$ \max_{V \in \mathcal{V}_v} E\{ U(V) \} = \max_{\{ V \,\mid\, E^{Q^j}\{\tilde V\} = v;\; j = 1,\cdots,J \}} E\{ U(V) \} $$
→ Use the Lagrange multiplier method: with $L^j := \frac{dQ^j}{dP}$, so that $E^{Q^j}\{ \tilde V \} = E\{ \tilde V L^j \}$, one has
$$ \max_V\; E\left\{ U(V) - \sum_{j=1}^J \lambda_j\, B_N^{-1}\, V\, L^j \right\} $$
→
$$ U'(V) = \sum_{j=1}^J \lambda_j\, B_N^{-1}\, L^j $$

  50. • Putting $I(\cdot) = (U'(\cdot))^{-1}$ it follows that
$$ V_N^* = I\!\left( \sum_{j=1}^J \lambda_j\, B_N^{-1}\, L^j \right) $$
with the λ_j satisfying the system of budget equations
$$ v = E\left\{ B_N^{-1}\, V_N^*\, L^j \right\} = E\left\{ B_N^{-1}\, L^j\; I\!\left( \sum_{k=1}^J \lambda_k\, B_N^{-1}\, L^k \right) \right\}, \qquad j = 1, \cdots, J $$
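For U = log (so I(y) = 1/y) in a complete market with a single MM, the budget equation collapses to v = 1/λ, and a numeric root-finder must recover λ = 1/v. The sketch below (a hypothetical three-state Ω with illustrative P and L; J = 1, B_N = 1) solves the budget equation by bisection.

```python
# Budget-equation sketch for U = log on a finite Omega; all data illustrative.
P = [0.25, 0.25, 0.5]  # physical probabilities
L = [0.8, 1.6, 0.8]    # dQ/dP on the three states; satisfies E{L} = 1
B_N, v = 1.0, 5.0

def I(y):
    """Inverse marginal utility for U = log: I = (U')^{-1}, i.e. I(y) = 1/y."""
    return 1.0 / y

def budget(lam):
    """E{ B_N^{-1} L I(lam B_N^{-1} L) } - v; decreasing in lam."""
    return sum(p * (l / B_N) * I(lam * l / B_N) for p, l in zip(P, L)) - v

lo, hi = 1e-6, 1e6
for _ in range(200):  # bisection on the decreasing function budget()
    mid = 0.5 * (lo + hi)
    if budget(mid) > 0:
        lo = mid
    else:
        hi = mid
lam = 0.5 * (lo + hi)
print(lam)  # close to 1/v = 0.2
```

With several vertices Q^j the same idea applies, but one solves a J-dimensional system rather than a scalar root-finding problem.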

  51. Example: U(v) = log v → I(y) = y^{-1} → In a complete market (a single MM Q) the budget equation becomes
$$ v = \lambda^{-1} \;\Leftrightarrow\; \lambda = v^{-1} $$
and, with $L = \frac{dQ}{dP}$,
$$ V_N^* = v\, \frac{B_N}{L} $$

  52. • In a binomial market model with ν_N denoting the total (random) number of up-movements,
$$ L = \frac{dQ}{dP}(\nu_N) = \left( \frac{q}{p} \right)^{\nu_N} \left( \frac{1 - q}{1 - p} \right)^{N - \nu_N} $$
→ (for simplicity r_n = 0, i.e. B_n = 1)
$$ V_N^* = v \left( \frac{p}{q} \right)^{\nu_N} \left( \frac{1 - p}{1 - q} \right)^{N - \nu_N} $$
and (recall $E\{ \nu_N \} = Np$)
$$ E\{ U(V_N^*) \} = \log v + N\left[ p \log\frac{p}{q} + (1 - p)\log\frac{1 - p}{1 - q} \right] $$
→ compare with DP; similarly for the strategies.
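Both claims, the budget constraint E^Q{V_N^*} = v and the optimal value matching the DP answer log v + k_0 from slide 25, can be verified exactly by enumerating all 2^N paths. A sketch with illustrative parameters:

```python
import math
from itertools import product

# Illustrative binomial parameters (assumed, not from the slides); r_n = 0, B_n = 1
u, d, p, v, N = 1.2, 0.9, 0.6, 1.0, 4
q = (1 - d) / (u - d)

EQ_V = 0.0     # accumulates E^Q{V_N^*}; should equal v (budget constraint)
EP_logV = 0.0  # accumulates E^P{log V_N^*}; should equal log v + k_0
for path in product([0, 1], repeat=N):
    nu = sum(path)  # number of up-moves on this path
    Vstar = v * (p / q) ** nu * ((1 - p) / (1 - q)) ** (N - nu)
    EQ_V += q ** nu * (1 - q) ** (N - nu) * Vstar
    EP_logV += p ** nu * (1 - p) ** (N - nu) * math.log(Vstar)

k0 = N * (p * math.log(p / q) + (1 - p) * math.log((1 - p) / (1 - q)))
print(EQ_V, EP_logV, math.log(v) + k0)
```

The agreement of EP_logV with the DP value function at n = 0 is the "compare with DP" of the slide, obtained here without ever computing a strategy.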

  53. Martingale method (discrete time). Methodology (terminal utility with consumption) Definition: An investment/consumption strategy (φ, c) is admissible if $c_N \le V_N$.

  54. • Recalling that, allowing also for consumption, the self-financing condition reads
$$ \tilde V_n = V_0 + \sum_{m=0}^{n-1} \sum_{i=1}^K \phi_{m+1}^i\, \Delta \tilde S_m^i - \sum_{m=0}^{n-1} \tilde c_m $$
we give also the following Definition: An investment/consumption strategy (φ, c) is attainable from the initial endowment V_0 = v if (letting the set of MM's be a convex polyhedron with J vertices)
$$ v = E^{Q^j}\left\{ \tilde c_0 + \cdots + \tilde c_{N-1} + \tilde V_N \right\}, \qquad \forall\, j = 1, \cdots, J $$

  55. Procedure i) Determine the set of attainable consumption processes and terminal wealths. ii) Determine the optimal attainable consumption and terminal wealth. iii) Determine an investment strategy that allows one to consume according to the optimal consumption process. Solving i): see the definition of attainability.

  56. Solving ii):
$$ \max_{c, V_N}\; E\left\{ \sum_{n=0}^N U_c(c_n) + U_p(V_N - c_N) \right\} $$
with the following budget equations, where $N_n^j := B_n^{-1}\, E\{ L^j \mid \mathcal{F}_n \}$:
$$ v = E^{Q^j}\left\{ \sum_{n=0}^{N-1} \tilde c_n + \tilde V_N \right\} = E\left\{ \left( \sum_{n=0}^{N-1} \tilde c_n + \tilde V_N \right) L^j \right\} $$
$$ = E\left\{ \sum_{n=0}^{N-1} E\{ B_n^{-1} c_n L^j \mid \mathcal{F}_n \} + E\{ B_N^{-1} V_N L^j \mid \mathcal{F}_N \} \right\} = E\left\{ \sum_{n=0}^{N-1} c_n N_n^j + V_N N_N^j \right\}; \qquad \forall\, j = 1, \cdots, J $$
Having $U_c(c) = -\infty$ for c < 0 and $U_p(v) = -\infty$ for v < 0 guarantees $c_n \ge 0$, $c_N \le V_N$ → admissibility.

  57. Lagrange multiplier technique
$$ \max\; E\left\{ \sum_{n=0}^N U_c(c_n) + U_p(V_N - c_N) - \sum_{j=1}^J \lambda_j \left[ \sum_{n=0}^{N-1} c_n N_n^j + V_N N_N^j \right] \right\} $$
⇒
$$ U_c'(c_n) = \sum_{j=1}^J \lambda_j N_n^j; \qquad n = 0, \cdots, N-1 $$
$$ U_c'(c_N) = U_p'(V_N - c_N) $$
$$ U_p'(V_N - c_N) = \sum_{j=1}^J \lambda_j N_N^j $$

  58.
$$ c_n = I_c\!\left( \sum_{j=1}^J \lambda_j N_n^j \right); \qquad n = 0, \cdots, N $$
⇒
$$ V_N = I_c\!\left( \sum_{j=1}^J \lambda_j N_N^j \right) + I_p\!\left( \sum_{j=1}^J \lambda_j N_N^j \right) $$
with the budget equations
$$ v = E\left\{ \sum_{n=0}^{N} N_n^j\; I_c\!\left( \sum_{k=1}^J \lambda_k N_n^k \right) + N_N^j\; I_p\!\left( \sum_{k=1}^J \lambda_k N_N^k \right) \right\} $$
for j = 1, ..., J.

  59. Martingale approach (continuous time). Preliminaries: determining the hedging strategy in a complete market (martingale representation)
$$ (P): \quad dS_t = (\mathrm{diag}\,S_t)\,A_t\,dt + (\mathrm{diag}\,S_t)\,\Sigma_t\,dw_t, \qquad \Sigma_t \text{ invertible} $$
→ We want a measure Q ∼ P s.t.
$$ (Q): \quad dS_t = (\mathrm{diag}\,S_t)\,r_t\mathbf{1}\,dt + (\mathrm{diag}\,S_t)\,\Sigma_t\,dw_t^Q $$
→ The discounted prices $\tilde S_t^i := B_t^{-1} S_t^i$ then satisfy
$$ d\tilde S_t = (\mathrm{diag}\,\tilde S_t)\,\Sigma_t\,dw_t^Q $$
i.e. $\tilde S_t$ is a (Q, F_t)-martingale (Q is a martingale measure (MM)).

  60. → The comparison of the two representations implies
$$ dw_t^Q = dw_t + \theta_t\,dt \qquad \text{where} \qquad \theta_t := \Sigma_t^{-1}(A_t - r_t\mathbf{1}) $$
i.e. Q is obtained from P by a Girsanov-type measure transformation, implying a translation of the Wiener process w_t by θ_t. → For the given model a MM exists and is unique:
$$ L = \frac{dQ}{dP} = \exp\left( -\int_0^T \theta_t'\,dw_t - \frac{1}{2}\int_0^T \theta_t'\theta_t\,dt \right) $$

  61. • From the self-financing condition
$$ dV_t = \phi_t^0\,dB_t + \sum_{i=1}^N \phi_t^i\,dS_t^i $$
putting $\tilde V_t := B_t^{-1} V_t$, one has
$$ d\tilde V_t = \phi_t\,d\tilde S_t = \phi_t\,(\mathrm{diag}\,\tilde S_t)\,\Sigma_t\,dw_t^Q $$
i.e., under Q, also $\tilde V_t$ is a martingale with
$$ \tilde V_t = \tilde V_0 + \int_0^t \phi_s\,(\mathrm{diag}\,\tilde S_s)\,\Sigma_s\,dw_s^Q $$
and the problem is to possibly find $\tilde V_0$ and φ_t s.t. $\tilde V_T = B_T^{-1} H_T$ a.s.

  62. • Consider the following (Q, F_t)-martingale (assume $H_T = H(S_T)$ and put $\tilde H_T := B_T^{-1} H_T$):
$$ \tilde M_t := E^Q\{ \tilde H_T \mid \mathcal{F}_t \} = E^Q\{ \tilde H_T \mid \tilde S_t \} =: F(t, \tilde S_t) $$
⇒ The problem is solved if we find $\tilde V_0$ and φ_t s.t. $\tilde V_t = \tilde M_t$ a.s. (we need a martingale representation for $\tilde M_t$). • By Ito's rule
$$ d\tilde M_t = dF(t, \tilde S_t) = \left[ F_t(\cdot) + \tfrac{1}{2}\,\mathrm{tr}\{ (\mathrm{diag}\,\tilde S_t)\Sigma_t\Sigma_t'(\mathrm{diag}\,\tilde S_t)\, F_{ss}(\cdot) \} \right] dt + F_s(\cdot)\,(\mathrm{diag}\,\tilde S_t)\,\Sigma_t\,dw_t^Q $$

  63. • Since $\tilde M_t$ is a martingale,
$$ F_t(t, s) + \tfrac{1}{2}\,\mathrm{tr}\{ (\mathrm{diag}\,s)\Sigma_t\Sigma_t'(\mathrm{diag}\,s)\, F_{ss}(t, s) \} = 0; \qquad F(T, s) = \tilde H(s) $$
and one has the explicit martingale representation
$$ \tilde M_t = \tilde M_0 + \int_0^t F_s(u, \tilde S_u)\,(\mathrm{diag}\,\tilde S_u)\,\Sigma_u\,dw_u^Q $$
The problem is thus solved by choosing
$$ \tilde V_0 = \tilde M_0 = E^Q\{ \tilde H_T \}; \qquad \phi_t = F_s(t, \tilde S_t) $$

  64. Basic idea of the martingale method. Two steps: i) Determine the optimal value of the cost functional that, for a given initial capital V_0, can be reached by a self-financing portfolio (static optimization under a constraint). ii) Determine the control/strategy that achieves this optimal value. → For step ii) use martingale representation. → To solve the (static) problem in step i) there are several possibilities, e.g.: • the method based on Lagrange multipliers; • the method based on convex duality.

  65. Lagrange multiplier method
$$ \max_{V \in \mathcal{V}_v} E\{ U(V) \} \qquad \text{with} \qquad \mathcal{V}_v = \left\{ V \mid E^Q\{ B_T^{-1} V \} = v \right\} $$
leads then to
$$ \max_V \left[ E\{ U(V) \} - \lambda\, E^Q\{ B_T^{-1} V \} \right] = \max_V\; E\left\{ U(V) - \lambda\, L\, B_T^{-1} V \right\} $$

  66. Example: U(v) = log v; single MM Q, B_t ≡ 1. In this case I(y) = y^{-1} and λ = v^{-1}
→
$$ V_T^* = v\, L^{-1} \qquad \text{with} \qquad L = \frac{dQ}{dP} = \exp\left( -\int_0^T \theta_t'\,dw_t - \frac{1}{2}\int_0^T \theta_t'\theta_t\,dt \right) $$
where $\theta_t = \Sigma_t^{-1}(A_t - r_t\mathbf{1})$, and therefore
$$ E\{ \log V_T^* \} = \log v + E\left\{ \int_0^T \theta_t'\,dw_t + \frac{1}{2}\int_0^T \theta_t'\theta_t\,dt \right\} = \log v + \frac{1}{2}\int_0^T (A_t - r_t\mathbf{1})'\,[\Sigma_t\Sigma_t']^{-1}(A_t - r_t\mathbf{1})\,dt $$

  67. • The optimal investment strategy is now determined as the hedging strategy for the claim $H_T = V_T^* = vL^{-1}$. → Since B_t ≡ 1, all quantities are automatically already discounted, and so, under the unique MM Q, one has
$$ dS_t = (\mathrm{diag}\,S_t)\,\Sigma_t\,dw_t^Q; \qquad dV_t = V_t\,\pi_t'\,\Sigma_t\,dw_t^Q $$
• Determine now π_t such that the Q-martingale V_t matches the following Q-martingale M_t:
$$ M_t := E^Q\{ vL^{-1} \mid \mathcal{F}_t \} $$
→ we need a representation of $L^{-1}$ under Q.

  68. • From $L = \frac{dQ}{dP} = \exp\left( -\int_0^T \theta_t'\,dw_t - \frac{1}{2}\int_0^T \theta_t'\theta_t\,dt \right)$ and using the fact that $dw_t^Q = dw_t + \theta_t\,dt$, one has
$$ L^{-1} = \frac{dP}{dQ} = \exp\left( \int_0^T \theta_t'\,dw_t + \frac{1}{2}\int_0^T \theta_t'\theta_t\,dt \right) = \exp\left( \int_0^T \theta_t'\,dw_t^Q - \frac{1}{2}\int_0^T \theta_t'\theta_t\,dt \right) $$
→
$$ L_t^{-1} := E^Q\{ L^{-1} \mid \mathcal{F}_t \} = \exp\left( \int_0^t \theta_s'\,dw_s^Q - \frac{1}{2}\int_0^t \theta_s'\theta_s\,ds \right); \qquad dL_t^{-1} = L_t^{-1}\,\theta_t'\,dw_t^Q $$

  69. • One can now write $M_t := E^Q\{ vL^{-1} \mid \mathcal{F}_t \} = v\,L_t^{-1}$ and
$$ dM_t = v\,L_t^{-1}\,\theta_t'\,dw_t^Q = M_t\,\theta_t'\,dw_t^Q $$
From
$$ dV_t = V_t\,\pi_t'\,\Sigma_t\,dw_t^Q; \qquad dM_t = M_t\,\theta_t'\,dw_t^Q $$
one then has
$$ \pi_t'\,\Sigma_t = \theta_t' \quad \rightarrow \quad \pi_t = (\Sigma_t')^{-1}\theta_t $$
• Recalling that $\theta_t := \Sigma_t^{-1}(A_t - r_t\mathbf{1}) = \Sigma_t^{-1}A_t$ (since $B_t \equiv 1 \Rightarrow r_t = 0$), one finally has
$$ \pi_t = [\Sigma_t\Sigma_t']^{-1} A_t $$
which is constant if A_t and Σ_t do not depend on time.
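The matching step is a pure linear-algebra identity: with θ = Σ^{-1}A and π = (ΣΣ')^{-1}A one has Σ'π = Σ'(ΣΣ')^{-1}A = Σ^{-1}A = θ, i.e. π'Σ = θ'. A 2×2 numeric sketch in pure Python (Σ and A are illustrative values):

```python
# 2x2 check that pi = (Sigma Sigma')^{-1} A satisfies pi' Sigma = theta'
# with theta = Sigma^{-1} A. Sigma and A are illustrative assumptions.
Sigma = [[0.3, 0.1], [0.0, 0.2]]
A = [0.06, 0.04]

def mat_vec(M, x):
    return [M[0][0] * x[0] + M[0][1] * x[1], M[1][0] * x[0] + M[1][1] * x[1]]

def mat_mul(M, N):
    return [[sum(M[i][k] * N[k][j] for k in range(2)) for j in range(2)] for i in range(2)]

def inv2(M):
    det = M[0][0] * M[1][1] - M[0][1] * M[1][0]
    return [[M[1][1] / det, -M[0][1] / det], [-M[1][0] / det, M[0][0] / det]]

def transpose(M):
    return [[M[0][0], M[1][0]], [M[0][1], M[1][1]]]

theta = mat_vec(inv2(Sigma), A)                           # Sigma^{-1} A
pi = mat_vec(inv2(mat_mul(Sigma, transpose(Sigma))), A)   # (Sigma Sigma')^{-1} A
lhs = mat_vec(transpose(Sigma), pi)                       # (pi' Sigma)' = Sigma' pi
print(lhs, theta)  # the two vectors coincide
```

In the scalar case this collapses to the Merton-type fraction π = A/Σ², matching the feedback formula of slide 37 evaluated on the log-utility value function.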

  70. Discussion of DP vs MM • DP is based (in continuous time) on HJB: first one determines the optimal control as a function of the (yet unknown) optimal value; substituting this back into HJB one obtains a nonlinear PDE that leads to the optimal value. In MM it is the opposite: first one determines the optimal value without reference to the control, and then the optimal strategy is determined as a strategy that leads to this optimal value.

  71. • DP is a fully dynamic procedure by which, provided that the state process is Markovian and the cost is additive over time, the optimization over time is reduced to a parameter optimization. MM is a more static procedure and, in fact, it does not require Markovianity. In general, however, MM has a narrower field of applicability. • The dynamic structure of DP makes it better suited to deal with problems under partial/incomplete information. • Explicit solutions are not easy to obtain by either of the methods. For DP there exist approximation methods, which is not so much the case for MM.

  72. Incomplete information/model uncertainty. To obtain an optimal solution for a financial problem one needs a model. The model may not be perfectly known; on the other hand, the solution may be rather sensitive to the model. → Problem of model uncertainty (model risk). → In what follows, three possible approaches to hedging and utility maximization under model uncertainty.

  73. Min-max approach. It is a natural approach, but rather conservative in that it protects against the worst-case scenario. • Letting $\mathcal{P}$ be a family of possible “real world probability measures” (ambiguity set), consider the following criterion, related to shortfall-risk minimization for the hedging problem:
$$ \inf_\pi \sup_{P \in \mathcal{P}}\; E_{S_0,V_0}^P\left\{ L\big( (H_T - V_T^\pi)^+ \big) \right\} $$
→ may be considered as the upper value of a fictitious game between the market and the agent.

  74. Question: Does this game have a value, i.e. does the upper value coincide with the lower (max-min) value
$$ \sup_{P \in \mathcal{P}} \inf_\pi\; E_{S_0,V_0}^P\left\{ L\big( (H_T - V_T^\pi)^+ \big) \right\}\; ? $$
Answer: (in general) yes! → This approach requires in general a large initial capital, and it does not easily allow one to incorporate the successive information that becomes available by observing the market.

  75. Adaptive approaches (stochastic control under partial information; stochastic adaptive control) • Consider parametrized families of models and successively update the knowledge about the parameters on the basis of observed prices. → Bayesian point of view: updating the knowledge of the parameters ≡ updating their distributions. → The unknown quantities may also be hidden processes ⇒ combined filtering and parameter estimation.

  76. A. A first discrete-time case • Underlying market model (only one risky asset) • Start from a classical price evolution model in continuous time, defined under the physical measure P:
$$ dS_t = S_t\left[ a\,dt + X_t\,dw_t \right] $$
with w_t a Wiener process and where X_t is the not directly observable volatility process (factor). • For $Y_t := \log S_t$ one then has
$$ dY_t = \left( a - \tfrac{1}{2} X_t^2 \right) dt + X_t\,dw_t $$
