
Introduction to Convex Optimization (Xuezhi Wang, Computer Science) - PowerPoint PPT Presentation



  1. Introduction to Convex Optimization
     Xuezhi Wang, Computer Science Department, Carnegie Mellon University
     10-701 recitation, Jan 29

  2. Outline
     1 Convexity: Convex Sets; Convex Functions
     2 Unconstrained Convex Optimization: First-order Methods; Newton's Method
     3 Constrained Optimization: Primal and dual problems; KKT conditions

  3. Outline (section marker: Convexity / Convex Sets)

  4. Convex Sets
     Definition: a set X is convex if for all x, x′ ∈ X and λ ∈ [0, 1], it follows that λx + (1 − λ)x′ ∈ X.
     Examples:
     - Empty set ∅, single point {x_0}, the whole space R^n
     - Hyperplanes {x | a⊤x = b}, halfspaces {x | a⊤x ≤ b}
     - Euclidean balls {x | ||x − x_c||_2 ≤ r}
     - Positive semidefinite matrices: S^n_+ = {A ∈ S^n | A ⪰ 0} (S^n is the set of symmetric n × n matrices)
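As a quick numerical sanity check (an assumed example, not from the slides), the convexity of a Euclidean ball can be verified directly: any convex combination of two points in the ball stays in the ball. The center, radius, and sample count below are arbitrary.

```python
import math
import random

random.seed(0)
xc, r = [1.0, -2.0], 3.0   # arbitrary center and radius

def in_ball(p):
    # membership test for {x : ||x - xc||_2 <= r}, with a tiny float tolerance
    return math.dist(p, xc) <= r + 1e-12

def rand_point_in_ball():
    # rejection sampling from the bounding square around the ball
    while True:
        p = [xc[i] + random.uniform(-r, r) for i in range(2)]
        if in_ball(p):
            return p

for _ in range(1000):
    x, y = rand_point_in_ball(), rand_point_in_ball()
    lam = random.random()
    z = [lam * a + (1 - lam) * b for a, b in zip(x, y)]
    assert in_ball(z)   # convex combination never leaves the ball
```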

  5. Convexity-Preserving Set Operations
     For convex sets C, D:
     - Translation: {x + b | x ∈ C}
     - Scaling: {λx | x ∈ C}
     - Affine image: {Ax + b | x ∈ C}
     - Intersection: C ∩ D
     - Set sum: C + D = {x + y | x ∈ C, y ∈ D}

  6. Outline (section marker: Convexity / Convex Functions)

  7. Convex Functions
     f is convex if dom f is convex and for all λ ∈ [0, 1]:
       λf(x) + (1 − λ)f(y) ≥ f(λx + (1 − λ)y)
     First-order condition: if f is differentiable, f(y) ≥ f(x) + ∇f(x)⊤(y − x)
     Second-order condition: if f is twice differentiable, ∇²f(x) ⪰ 0
     Strictly convex: ∇²f(x) ≻ 0
     Strongly convex: ∇²f(x) ⪰ dI with d > 0
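The first-order condition can be spot-checked with a small, self-contained script (an assumed example, f(x) = x², not from the slides): the tangent line at any point lies below the graph.

```python
# First-order condition f(y) >= f(x) + f'(x)(y - x) for the convex f(x) = x**2
def f(x):
    return x * x

def grad_f(x):
    return 2 * x   # derivative of x**2

points = [-3.0, -0.5, 0.0, 1.25, 4.0]   # arbitrary test points
holds = all(f(y) >= f(x) + grad_f(x) * (y - x)
            for x in points for y in points)
assert holds   # tangent line at any x underestimates f everywhere
```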

  8. Convex Functions
     Sublevel sets of a convex function are convex: for x, y ∈ X = {x | f(x) ≤ c},
       f(λx + (1 − λ)y) ≤ λf(x) + (1 − λ)f(y) ≤ c, hence λx + (1 − λ)y ∈ X.
     Convex functions have no non-global local minima: proof by contradiction, since linear interpolation toward a better point violates the local-minimum condition.
     Convex hull: Conv(X) = {x̄ | x̄ = Σ_i α_i x_i, where α_i ≥ 0 and Σ_i α_i = 1}
     The convex hull of a set is always a convex set.

  9. Examples of Convex Functions
     - Exponential: e^{ax} is convex on R for any a ∈ R
     - Powers: x^a is convex on R_{++} when a ≥ 1 or a ≤ 0, and concave for 0 ≤ a ≤ 1
     - Powers of absolute value: |x|^p for p ≥ 1 is convex on R
     - Logarithm: log x is concave on R_{++}
     - Norms: every norm on R^n is convex
     - Max: f(x) = max{x_1, ..., x_n} is convex on R^n
     - Log-sum-exp: f(x) = log(e^{x_1} + ... + e^{x_n}) is convex on R^n
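The convexity of log-sum-exp can be spot-checked numerically via midpoint convexity, f((x + y)/2) ≤ (f(x) + f(y))/2; the input vectors below are arbitrary test values.

```python
import math

def logsumexp(v):
    m = max(v)   # subtract the max for numerical stability
    return m + math.log(sum(math.exp(t - m) for t in v))

# Midpoint convexity check on two arbitrary vectors
x = [1.0, -2.0, 0.5]
y = [-1.0, 3.0, 0.0]
mid = [(a + b) / 2 for a, b in zip(x, y)]
assert logsumexp(mid) <= (logsumexp(x) + logsumexp(y)) / 2
```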

  10. Convexity-Preserving Function Operations
     For convex functions f(x), g(x):
     - Nonnegative weighted sum: af(x) + bg(x) with a, b ≥ 0
     - Pointwise maximum: f(x) = max{f_1(x), ..., f_m(x)}
     - Composition with an affine function: f(Ax + b)
     - Composition with a nondecreasing convex g: g(f(x))

  11. Outline (section marker: Unconstrained Convex Optimization / First-order Methods)

  12. Gradient Descent
     Given a starting point x ∈ dom f, repeat:
     1. Δx := −∇f(x)
     2. Choose step size t via exact or backtracking line search
     3. Update: x := x + tΔx
     until the stopping criterion is satisfied.
     Key idea:
     - The negative gradient points in a descent direction
     - Locally, the gradient gives a good first-order approximation of the objective
     Gradient descent with line search:
     - Get a descent direction
     - Run an unconstrained line search along it
     - Exponential (linear-rate) convergence for strongly convex objectives
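The three steps above can be sketched in a few lines. This is a minimal 1-D illustration with the usual backtracking (Armijo) parameters α and β, not the slides' own implementation; the test objective f(x) = (x − 3)² is an assumed example.

```python
def gradient_descent(f, grad, x0, alpha=0.25, beta=0.5, tol=1e-8, max_iter=10_000):
    """Gradient descent with backtracking line search (1-D sketch)."""
    x = x0
    for _ in range(max_iter):
        g = grad(x)
        if abs(g) < tol:          # stopping criterion: gradient is small
            break
        dx = -g                   # step 1: descent direction
        t = 1.0                   # step 2: backtracking line search
        # shrink t until the Armijo sufficient-decrease condition holds
        while f(x + t * dx) > f(x) + alpha * t * g * dx:
            t *= beta
        x = x + t * dx            # step 3: update
    return x

# Assumed example: minimize f(x) = (x - 3)**2, whose minimizer is x* = 3
x_star = gradient_descent(lambda x: (x - 3) ** 2,
                          lambda x: 2 * (x - 3), x0=10.0)
```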

  13. Convergence Analysis
     - Assume ∇f is L-Lipschitz continuous. Then gradient descent with fixed step size t ≤ 1/L has convergence rate O(1/k), i.e., to get f(x^(k)) − f(x*) ≤ ε we need O(1/ε) iterations.
     - Assume f is also strongly convex, i.e., ∇²f(x) ⪰ dI, and ∇f is L-Lipschitz continuous. Then gradient descent with fixed step size t ≤ 2/(d + L) has convergence rate O(c^k), where c ∈ (0, 1), i.e., to get f(x^(k)) − f(x*) ≤ ε we need O(log(1/ε)) iterations.
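The O(c^k) rate can be observed numerically on an assumed strongly convex quadratic, f(x, y) = (x² + 10y²)/2, whose Hessian eigenvalues are d = 1 and L = 10; with the fixed step t = 2/(d + L), each coordinate contracts by exactly c = (L − d)/(L + d) = 9/11 per iteration.

```python
d, L = 1.0, 10.0          # Hessian eigenvalues of f(x, y) = (x**2 + 10*y**2)/2
t = 2 / (d + L)           # fixed step size from the theorem
c = (L - d) / (L + d)     # predicted per-iteration contraction factor

x, y = 1.0, 1.0           # arbitrary starting point
errs = []
for _ in range(10):
    x, y = x - t * d * x, y - t * L * y   # gradient step, coordinate-wise
    errs.append(max(abs(x), abs(y)))      # distance to the optimum (0, 0)

ratios = [errs[k + 1] / errs[k] for k in range(len(errs) - 1)]
assert all(abs(r - c) < 1e-9 for r in ratios)   # geometric O(c**k) decay
```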

  14. Outline (section marker: Unconstrained Convex Optimization / Newton's Method)

  15. Newton's Method
     - Convex objective function f with nonnegative second derivative: ∇²f(x) ⪰ 0
     - Taylor expansion: f(x + δ) = f(x) + δ⊤∇f(x) + (1/2) δ⊤∇²f(x) δ + O(||δ||³)
     - Minimize the quadratic approximation and iterate until convergence:
       x ← x − [∇²f(x)]⁻¹ ∇f(x)
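A minimal 1-D sketch of the update x ← x − [∇²f(x)]⁻¹ ∇f(x); the objective f(x) = eˣ + e⁻ˣ (convex, minimizer at x* = 0) is an assumed example, not from the slides.

```python
import math

def newton(grad, hess, x0, tol=1e-12, max_iter=50):
    """Newton's method in 1-D: x <- x - f'(x) / f''(x)."""
    x = x0
    for _ in range(max_iter):
        step = grad(x) / hess(x)   # [f''(x)]^{-1} f'(x)
        x -= step
        if abs(step) < tol:        # converged: Newton step is tiny
            break
    return x

# Assumed example: f(x) = exp(x) + exp(-x), so
# f'(x) = exp(x) - exp(-x) and f''(x) = exp(x) + exp(-x) > 0
x_star = newton(lambda x: math.exp(x) - math.exp(-x),
                lambda x: math.exp(x) + math.exp(-x), x0=1.0)
```

Starting from x0 = 1, the iterates shrink very rapidly once near the optimum, illustrating the quadratic-convergence regime discussed on the next slide.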

  16. Convergence Analysis (Newton's Method)
     Two convergence regimes:
     - As slow as gradient descent outside the region where the Taylor expansion is accurate:
       ||∇f(x*) − ∇f(x) − ∇²f(x)(x* − x)|| ≤ γ ||x* − x||²
     - Quadratic convergence once the bound holds:
       ||x_{n+1} − x*|| ≤ γ ||[∇²f(x_n)]⁻¹|| ||x_n − x*||²

  17. Outline (section marker: Constrained Optimization / Primal and dual problems)

  18. Constrained Optimization
     Primal problem:
       min_{x ∈ R^n} f(x)
       subject to h_i(x) ≤ 0, i = 1, ..., m
                  l_j(x) = 0, j = 1, ..., r
     Lagrangian:
       L(x, u, v) = f(x) + Σ_{i=1}^m u_i h_i(x) + Σ_{j=1}^r v_j l_j(x)
     where u ∈ R^m, v ∈ R^r, and u ≥ 0.
     Lagrange dual function:
       g(u, v) = min_{x ∈ R^n} L(x, u, v)

  19. Constrained Optimization
     Dual problem:
       max_{u, v} g(u, v) subject to u ≥ 0
     - The dual problem is always a convex optimization problem, since g is always concave (even if the primal problem is not convex).
     - The primal and dual optimal values always satisfy weak duality: f* ≥ g*.
     - Slater's condition: for a convex primal, if there is an x such that h_1(x) < 0, ..., h_m(x) < 0 and l_1(x) = 0, ..., l_r(x) = 0, then strong duality holds: f* = g*.
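Weak and strong duality can be verified on a tiny assumed example (min x² subject to 1 − x ≤ 0, not from the slides), where the dual function works out in closed form: L(x, u) = x² + u(1 − x) is minimized over x at x = u/2, giving g(u) = u − u²/4. The primal optimum is f* = 1 at x = 1, and Slater's condition holds (x = 2 is strictly feasible).

```python
# Assumed example: primal  min x**2  s.t.  1 - x <= 0,  so f* = 1 at x = 1.
def g(u):
    # Lagrange dual function: min_x x**2 + u*(1 - x), minimizer x = u/2
    x = u / 2.0
    return x * x + u * (1.0 - x)

f_star = 1.0
dual_vals = [g(k / 100.0) for k in range(501)]   # grid over u in [0, 5]

assert all(v <= f_star + 1e-12 for v in dual_vals)   # weak duality: g(u) <= f*
g_star = max(dual_vals)
assert abs(g_star - f_star) < 1e-9   # strong duality, as Slater's condition holds
```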

  20. Outline (section marker: Constrained Optimization / KKT conditions)
