
  1. CSCI 1951-G – Optimization Methods in Finance. Part 10: Conic Optimization. April 6, 2018.

  2. This material is covered in the textbook, Chapters 9 and 10. Some of the material is taken from it. Some of the figures are from S. Boyd and L. Vandenberghe's book Convex Optimization, https://web.stanford.edu/~boyd/cvxbook/ .

  3. Outline 1. Cones and conic optimization 2. Converting quadratic constraints into cone constraints 3. Benchmark-relative portfolio optimization 4. Semidefinite programming 5. Approximating covariance matrices 6. SDP and approximation algorithms

  4. Cones. A set $C$ is a cone if for every $x \in C$ and $\theta \ge 0$, $\theta x \in C$. Example: $\{(x, |x|) : x \in \mathbb{R}\} \subset \mathbb{R}^2$. Is this set convex?
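A quick way to see the answer (my own addition, not on the slide): the set is a cone, but it is not convex, since the midpoint of two of its points can fall outside it:

```latex
% Take x_1 = (-1, 1) and x_2 = (1, 1), both in C = {(x, |x|) : x in R},
% with theta_1 = theta_2 = 1/2:
\tfrac{1}{2}(-1, 1) + \tfrac{1}{2}(1, 1) = (0, 1) \notin C,
\qquad \text{since } |0| = 0 \neq 1.
```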

  5. Convex Cones. A set $C$ is a convex cone if, for every $x_1, x_2 \in C$ and $\theta_1, \theta_2 \ge 0$, $\theta_1 x_1 + \theta_2 x_2 \in C$. [Figure 2.4: the pie slice shows all points of the form $\theta_1 x_1 + \theta_2 x_2$, where $\theta_1, \theta_2 \ge 0$. The apex of the slice (which corresponds to $\theta_1 = \theta_2 = 0$) is at 0; its edges (which correspond to $\theta_1 = 0$ or $\theta_2 = 0$) pass through the points $x_1$ and $x_2$.]

  6. Conic optimization. A conic optimization problem in standard form: $\min c^T x$ s.t. $Ax = b$, $x \in C$, where $C$ is a convex cone in a finite-dimensional vector space $X$. Note: linear objective function, linear constraints. If $X = \mathbb{R}^n$ and $C = \mathbb{R}^n_+$, then ...we get an LP! Conic optimization is a unifying framework for • linear programming, • second-order cone programming (SOCP), • semidefinite programming (SDP).
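To make the standard form concrete, here is a minimal sketch (not from the slides) using the cvxpy modeling library with invented data; taking $C = \mathbb{R}^n_+$ turns the conic program into an LP:

```python
import cvxpy as cp
import numpy as np

# Invented data for: min c^T x  s.t.  Ax = b, x in C = R^n_+ (i.e., an LP).
c = np.array([1.0, 2.0, 0.5])
A = np.array([[1.0, 1.0, 1.0]])
b = np.array([1.0])

x = cp.Variable(3)
prob = cp.Problem(cp.Minimize(c @ x),
                  [A @ x == b,
                   x >= 0])  # x >= 0 is membership in the nonnegative orthant cone
prob.solve()
print(x.value)  # puts all weight on the cheapest coordinate
```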

  7. Norm cones. Let ‖·‖ be any norm on $\mathbb{R}^{n-1}$. The norm cone associated with ‖·‖ is the set $C = \{x = (x_1, \ldots, x_n) : x_1 \ge \|(x_2, \ldots, x_n)\|\}$. It is a convex set.

  8. Second-order cone in $\mathbb{R}^3$. The second-order cone is the norm cone for the Euclidean norm ‖·‖₂. [Figure 2.10: boundary of the second-order cone in $\mathbb{R}^3$, $\{(x_1, x_2, t) : (x_1^2 + x_2^2)^{1/2} \le t\}$.] What happens when we slice the second-order cone, i.e., when we take its intersection with a hyperplane? We obtain ellipsoidal sets.
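As a tiny numerical illustration (my own, not from the slides), the membership test for the norm cone with the Euclidean norm:

```python
import numpy as np

def in_second_order_cone(point):
    """True iff x_1 >= ||(x_2, ..., x_n)||_2."""
    return point[0] >= np.linalg.norm(point[1:])

print(in_second_order_cone(np.array([1.0, 0.5, 0.5])))  # True:  1.0 >= sqrt(0.5)
print(in_second_order_cone(np.array([0.5, 0.5, 0.5])))  # False: 0.5 <  sqrt(0.5)
```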

  9. Rewriting constraints. Let's rewrite $C = \{x = (x_1, \ldots, x_n) : x_1 \ge \|(x_2, \ldots, x_n)\|_2\}$ as $x_1 \ge 0$, $x_1^2 - x_2^2 - \cdots - x_n^2 \ge 0$. This is a combination of a linear and a quadratic constraint. Also: convex quadratic constraints can be expressed as second-order cone membership constraints.

  10. Rewriting constraints. Quadratic constraint: $x^T P x + 2 q^T x + \gamma \le 0$. Assume w.l.o.g. that $P$ is positive definite, so the constraint is ...convex. Also assume, for technical reasons, that $q^T P^{-1} q - \gamma \ge 0$. Goal: rewrite the above constraint as a combination of linear and second-order cone membership constraints.

  11. Rewriting constraints. Because $P$ is positive definite, it has a Cholesky decomposition: there exists an invertible $R$ such that $P = R R^T$. Rewrite the constraint as: $(R^T x)^T (R^T x) + 2 q^T x + \gamma \le 0$. Let $y = (y_1, \ldots, y_n)^T = R^T x + R^{-1} q$. This map is a bijection between $x$ and $y$. We are going to rewrite the constraint as a constraint on $y$.

  12. Rewriting constraints. The constraint: $(R^T x)^T (R^T x) + 2 q^T x + \gamma \le 0$. It holds that $y^T y = (R^T x)^T (R^T x) + 2 q^T x + q^T P^{-1} q$. Since there is a bijection between $y$ and $x$, the constraint can be satisfied if and only if $\exists y$ s.t. $y = R^T x + R^{-1} q$, $y^T y \le q^T P^{-1} q - \gamma$.

  13. Rewriting constraints. The constraint is equivalent to: $\exists y$ s.t. $y = R^T x + R^{-1} q$, $y^T y \le q^T P^{-1} q - \gamma$. Let's denote with $y_0$ the square root of the r.h.s. of the second inequality: $y_0 = \sqrt{q^T P^{-1} q - \gamma} \in \mathbb{R}_+$. Consider the vector $(y_0, y_1, \ldots, y_n)$. The second inequality then is $y_0^2 \ge y^T y = \sum_{i=1}^n y_i^2$. Taking the square root on both sides: $y_0 \ge \sqrt{\sum_{i=1}^n y_i^2} = \|y\|_2$. This is the membership constraint for the second-order cone in $\mathbb{R}^{n+1}$.

  14. Rewriting constraints. We rewrite the convex quadratic constraint $x^T P x + 2 q^T x + \gamma \le 0$ as: $(y_1, \ldots, y_n)^T = R^T x + R^{-1} q$, $y_0 = \sqrt{q^T P^{-1} q - \gamma} \in \mathbb{R}_+$, $(y_0, y_1, \ldots, y_n) \in C$, which is a combination of linear and second-order cone membership constraints.
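The recipe above translates directly into code. A hedged sketch (my own, with invented `P`, `q`, `gamma` satisfying the slide's assumptions) of how the rewritten constraint might be stated in cvxpy:

```python
import cvxpy as cp
import numpy as np

# Invented data: P positive definite and q^T P^{-1} q - gamma >= 0.
P = np.array([[2.0, 0.5],
              [0.5, 1.0]])
q = np.array([1.0, -0.5])
gamma = -1.0

R = np.linalg.cholesky(P)                         # P = R R^T, R invertible
y0 = np.sqrt(q @ np.linalg.solve(P, q) - gamma)   # y_0 = sqrt(q^T P^{-1} q - gamma)

x = cp.Variable(2)
y = R.T @ x + np.linalg.solve(R, q)               # linear map y = R^T x + R^{-1} q
soc = cp.SOC(cp.Constant(y0), y)                  # (y_0, y_1, ..., y_n) in the SOC

# Use the rewritten constraint in place of x^T P x + 2 q^T x + gamma <= 0:
prob = cp.Problem(cp.Minimize(np.ones(2) @ x), [soc])
prob.solve()
print(x.value)
```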

  15. Outline 1. Cones and conic optimization 2. Converting quadratic constraints into cone constraints 3. Benchmark-relative portfolio optimization 4. Semidefinite programming 5. Approximating covariance matrices 6. SDP and approximation algorithms

  16. Benchmark-relative portfolio optimization. Given a benchmark strategy $x_B$ (e.g., an index), develop a portfolio $x$ that tracks $x_B$ but adds value by beating it. I.e., we want a portfolio $x$ with positive expected excess return, $\mu^T (x - x_B) \ge 0$, and specifically we want to maximize the expected excess return. Challenge: balance the expected excess return with its variance.

  17. Tracking error and volatility constraints. The (predicted) tracking error of the portfolio $x$ is $TE(x) = \sqrt{(x - x_B)^T \Sigma (x - x_B)}$. It measures the variability of the excess returns. In benchmark-relative portfolio optimization, we solve mean-variance optimization w.r.t. the expected excess return and the tracking error: $\max \mu^T (x - x_B)$ s.t. $(x - x_B)^T \Sigma (x - x_B) \le T^2$, $Ax = b$.

  18. Comparison with mean-variance optimization. We have seen MVO as: $\min \frac{1}{2} x^T \Sigma x$ s.t. $\mu^T x \ge R$, $Ax = b$; or $\max \mu^T x - \frac{\delta}{2} x^T \Sigma x$ s.t. $Ax = b$. How do they differ from $\max \mu^T (x - x_B)$ s.t. $(x - x_B)^T \Sigma (x - x_B) \le T^2$, $Ax = b$? The latter is not a standard quadratic program: it has a nonlinear constraint.

  19. $\max \mu^T (x - x_B)$ s.t. $(x - x_B)^T \Sigma (x - x_B) \le T^2$, $Ax = b$. The nonlinear constraint is ...convex quadratic. We can rewrite it as a combination of linear and second-order cone membership constraints, and solve the resulting convex conic problem, as in the sketch below.
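A minimal sketch of this benchmark-relative problem in cvxpy (my own illustration; `mu`, `Sigma`, `x_B`, and `T` are invented toy data, and a budget constraint stands in for the generic $Ax = b$):

```python
import cvxpy as cp
import numpy as np

# Invented toy data: 3 assets, benchmark weights x_B, tracking-error budget T.
mu = np.array([0.08, 0.10, 0.06])
Sigma = np.array([[0.04, 0.01, 0.00],
                  [0.01, 0.09, 0.02],
                  [0.00, 0.02, 0.03]])
x_B = np.array([0.4, 0.3, 0.3])
T = 0.05

L = np.linalg.cholesky(Sigma)  # Sigma = L L^T
x = cp.Variable(3)

# (x - x_B)^T Sigma (x - x_B) <= T^2  <=>  ||L^T (x - x_B)||_2 <= T  (an SOC constraint)
constraints = [cp.norm(L.T @ (x - x_B), 2) <= T,
               cp.sum(x) == 1]  # stand-in for Ax = b

prob = cp.Problem(cp.Maximize(mu @ (x - x_B)), constraints)
prob.solve()
print(x.value, prob.value)
```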

  20. Outline 1. Cones and conic optimization 2. Converting quadratic constraints into cone constraints 3. Benchmark-relative portfolio optimization 4. Semidefinite programming 5. Approximating covariance matrices 6. SDP and approximation algorithms

  21. SemiDefinite Programming (SDP). The variables are the entries of a symmetric matrix in the cone of positive semidefinite matrices. [Figure 2.12: boundary of the positive semidefinite cone in $S^2$.]

  22. Application: approximating covariance matrices. Portfolio optimization almost always requires covariance matrices. These are not directly available, but are estimated. Estimation of covariance matrices is a very challenging task, mathematically and computationally, because the matrices must satisfy various properties (e.g., symmetry, positive semidefiniteness). To be efficient, many estimation methods do not impose problem-dependent constraints. Typically, one is then interested in finding the smallest distortion of the original estimate that satisfies the desired constraints.

  23. Application: approximating covariance matrices. • Let $\hat{\Sigma} \in S^n$ be an estimate of a covariance matrix. • $\hat{\Sigma}$ is symmetric ($\in S^n$) but not positive semidefinite. Goal: find the positive semidefinite matrix that is closest to $\hat{\Sigma}$ w.r.t. the Frobenius norm: $d_F(\Sigma, \hat{\Sigma}) = \sqrt{\sum_{i,j} (\Sigma_{ij} - \hat{\Sigma}_{ij})^2}$. Formally, the nearest covariance matrix problem: $\min_\Sigma d_F(\Sigma, \hat{\Sigma})$ s.t. $\Sigma \in C^n_s$, where $C^n_s$ is the cone of $n \times n$ symmetric and positive semidefinite matrices.

  24. Application: approximating covariance matrices. $\min_\Sigma d_F(\Sigma, \hat{\Sigma})$ s.t. $\Sigma \in C^n_s$. Introduce a dummy variable $t$ and rewrite the problem as: $\min t$ s.t. $d_F(\Sigma, \hat{\Sigma}) \le t$, $\Sigma \in C^n_s$. The first constraint can be written as a second-order cone constraint, so the problem is transformed into a conic optimization problem.
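A minimal cvxpy sketch of the nearest covariance matrix problem (my own illustration; `Sigma_hat` is an invented symmetric estimate with a negative eigenvalue):

```python
import cvxpy as cp
import numpy as np

# Invented symmetric estimate that is not positive semidefinite.
Sigma_hat = np.array([[ 1.0,  0.9, -0.6],
                      [ 0.9,  1.0,  0.9],
                      [-0.6,  0.9,  1.0]])

n = Sigma_hat.shape[0]
Sigma = cp.Variable((n, n), symmetric=True)

# Minimize the Frobenius-norm distance over the PSD cone (Sigma >> 0).
prob = cp.Problem(cp.Minimize(cp.norm(Sigma - Sigma_hat, 'fro')),
                  [Sigma >> 0])
prob.solve()
print(np.round(Sigma.value, 3))
print(np.linalg.eigvalsh(Sigma.value))  # all eigenvalues now (numerically) >= 0
```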

  25. Application: approximating covariance matrices. Variation of the problem with additional linear constraints. Let $E \subseteq \{(i, j) : 1 \le i, j \le n\}$, and let $(\ell_{ij}, u_{ij})$, for $(i, j) \in E$, be lower/upper bounds to impose on the entries. We want to solve: $\min_\Sigma d_F(\Sigma, \hat{\Sigma})$ s.t. $\ell_{ij} \le \Sigma_{ij} \le u_{ij}$ for all $(i, j) \in E$, $\Sigma \in C^n_s$.

  26. Application: approximating covariance matrices. For example, let $\hat{\Sigma}$ be an estimate of a correlation matrix. Correlation matrices have all diagonal entries equal to 1. We want to solve the nearest correlation matrix problem. We choose $E = \{(i, i) : 1 \le i \le n\}$ and $\ell_{ii} = u_{ii} = 1$ for $1 \le i \le n$.
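In code (again my own hedged sketch, reusing the invented `Sigma_hat` from above), the bounds $\ell_{ii} = u_{ii} = 1$ collapse to an equality constraint on the diagonal:

```python
import cvxpy as cp
import numpy as np

Sigma_hat = np.array([[ 1.0,  0.9, -0.6],
                      [ 0.9,  1.0,  0.9],
                      [-0.6,  0.9,  1.0]])

n = Sigma_hat.shape[0]
Sigma = cp.Variable((n, n), symmetric=True)

# Nearest correlation matrix: PSD cone membership plus a unit diagonal.
prob = cp.Problem(cp.Minimize(cp.norm(Sigma - Sigma_hat, 'fro')),
                  [Sigma >> 0, cp.diag(Sigma) == 1])
prob.solve()
print(np.round(Sigma.value, 3))  # diagonal entries are all 1
```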

  27. Application: approximating covariance matrices. Many other variants are possible: • force some entries of $\hat{\Sigma}$ to remain the same in $\Sigma$; • weight the changes to different entries differently, because we trust some entries more than others; • impose a lower bound on the minimum eigenvalue of $\Sigma$, to reduce instability. All of these can be easily solved with SDP software (see the sketch of the last variant below).
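For instance, the minimum-eigenvalue variant might be stated as follows (my own sketch; `delta` is a hypothetical eigenvalue floor):

```python
import cvxpy as cp
import numpy as np

Sigma_hat = np.array([[ 1.0,  0.9, -0.6],
                      [ 0.9,  1.0,  0.9],
                      [-0.6,  0.9,  1.0]])
n = Sigma_hat.shape[0]
delta = 0.05  # hypothetical lower bound on the minimum eigenvalue

Sigma = cp.Variable((n, n), symmetric=True)
# lambda_min(Sigma) >= delta  <=>  Sigma - delta * I is positive semidefinite.
prob = cp.Problem(cp.Minimize(cp.norm(Sigma - Sigma_hat, 'fro')),
                  [Sigma - delta * np.eye(n) >> 0])
prob.solve()
print(np.linalg.eigvalsh(Sigma.value))  # minimum eigenvalue (numerically) >= delta
```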

  28. Outline 1. Cones and conic optimization 2. Converting quadratic constraints into cone constraints 3. Benchmark-relative portfolio optimization 4. Semidefinite programming 5. Approximating covariance matrices 6. SDP and approximation algorithms
