1. Multistage robust convex optimization problems: A sampling based approach. Fabrizio Dabbene / Francesca Maggioni / Georg Ch. Pflug, November 2019.

2. Multistage robust (linear) programs

$$
\mathrm{RO}_{H+1} := \min_{x_1}\; c_1^\top x_1 \;+\; \sup_{\xi_1\in\Xi_1}\;\min_{x_2(\xi_1)}\; c_2(\xi_1)^\top x_2(\xi_1) \;+\; \sup_{\xi_2\in\Xi_2}\;\cdots\;+\;\sup_{\xi_H\in\Xi_H}\;\min_{x_{H+1}(\xi_H)}\; c_{H+1}(\xi_H)^\top x_{H+1}(\xi_H)
$$

$$
\begin{aligned}
\text{s.t.}\quad & A x_1 = h_1,\quad x_1 \ge 0\\
& T_1(\xi_1)\,x_1 + W_2(\xi_1)\,x_2(\xi_1) = h_2(\xi_1), && \forall\,\xi_1\in\Xi_1\\
& \qquad\vdots\\
& T_H(\xi_H)\,x_H(\xi_{H-1}) + W_{H+1}(\xi_H)\,x_{H+1}(\xi_H) = h_{H+1}(\xi_H), && \forall\,\xi_H\in\Xi_H\\
& x_t(\xi_{t-1}) \ge 0 && \forall\,\xi_{t-1}\in\Xi_{t-1};\;\; t = 2,\dots,H+1,
\end{aligned}
$$

where $c_1\in\mathbb{R}^{n_1}$ and $h_1\in\mathbb{R}^{m_1}$ are known vectors and $A\in\mathbb{R}^{m_1\times n_1}$ is a known matrix. The uncertain vectors and matrices affected by the parameters $\xi_t\in\Xi_t$ are $h_t\in\mathbb{R}^{m_t}$, $c_t\in\mathbb{R}^{n_t}$, $T_{t-1}\in\mathbb{R}^{m_t\times n_{t-1}}$ and $W_t\in\mathbb{R}^{m_t\times n_t}$, $t = 2,\dots,H+1$. The sets $\Xi_t$ are compact subsets of $\mathbb{R}^{k_t}$.
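For concreteness, the two-stage case $H = 1$ of the formulation above, spelled out from the general formula (no assumptions beyond those on the slide):

$$
\mathrm{RO}_{2} = \min_{x_1}\Big\{\, c_1^\top x_1 \;+\; \sup_{\xi_1\in\Xi_1}\;\min_{x_2(\xi_1)\ge 0}\big\{\, c_2(\xi_1)^\top x_2(\xi_1) \;:\; T_1(\xi_1)\,x_1 + W_2(\xi_1)\,x_2(\xi_1) = h_2(\xi_1) \,\big\} \;:\; A x_1 = h_1,\; x_1\ge 0 \,\Big\}.
$$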

3. Non-anticipativity

[Timeline diagram: decision $x_1$ at $t = 0$, observation $\xi_1\in\Xi_1$, decision $x_2$ at $t = t_1$, observation $\xi_2\in\Xi_2$, decision $x_3$ at $t = t_2$, observation $\xi_3\in\Xi_3$, decision $x_4$ at $t = t_3$; each decision may depend only on the observations already made.]

4. Replacing a huge infinite constraint set by a finite random extraction of the constraints

Consider the problem

$$
\mathrm{RO}:\quad \min_{x\in X}\Big\{\, c^\top x \;:\; \sup_{\xi\in\Xi} f(x,\xi) \le 0 \,\Big\}, \tag{1}
$$

where $x\in X\subseteq\mathbb{R}^n$ is the optimization variable, $X$ is convex and closed, and $x\mapsto f(x,\xi): X\times\Xi\to\mathbb{R}$ is convex for all $\xi\in\Xi$. Suppose that $\Xi$ is compact and that $P$ is a probability measure on it with nonvanishing density. Let $\xi^{(1)},\dots,\xi^{(N)}$ be independent samples from $\Xi$, drawn according to $P^N = P\times\cdots\times P$. The "scenario" approximation of problem (1) is defined as

$$
\mathrm{SO}_N:\quad \min_{x\in X}\Big\{\, c^\top x \;:\; \max_{1\le i\le N} f(x,\xi^{(i)}) \le 0 \,\Big\}. \tag{2}
$$

Problem $(\mathrm{SO}_N)$ is a random problem and its solution is random; however, it is solvable with standard solvers.
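A minimal sketch of the scenario approximation (2), assuming for illustration a linear constraint function $f(x,\xi)=\xi^\top x - 1$ and a box uncertainty set $\Xi=[0,1]^2$ with uniform sampling; the data ($c$, $\Xi$, $P$, $N$) are illustrative choices not taken from the slides, and scipy's `linprog` stands in for any standard solver.

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)

# Illustrative data (not from the slides): minimize c^T x over x in R^2
# subject to f(x, xi) = xi^T x - 1 <= 0 for all xi in Xi = [0, 1]^2.
c = np.array([-1.0, -1.0])
N = 200

# Sample xi^(1), ..., xi^(N) independently from P = Uniform(Xi).
xis = rng.uniform(0.0, 1.0, size=(N, 2))

# Scenario problem SO_N: replace the sup over Xi by the N sampled constraints
# xi_i^T x <= 1, i = 1, ..., N, and solve the resulting LP.
res = linprog(c, A_ub=xis, b_ub=np.ones(N), bounds=[(None, None)] * 2)
print("scenario solution x_N* =", res.x, " value v(SO_N) =", res.fun)
```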

5. The violation probability

Many authors have studied how well (2) approximates the basic problem (1), deriving convergence rates, central limit type theorems and laws of large numbers. It was the idea of Calafiore and Campi to look at the quality of the approximation in a different way, namely by studying the "violation probability distribution".

The "violation probability" of the sample $\hat\Xi_N := \{\xi^{(1)},\dots,\xi^{(N)}\}$ is defined as

$$
V(\hat\Xi_N) := P\Big\{\, \xi^{(N+1)} \;:\; \min_{x\in X}\Big\{ c^\top x : \max_{1\le i\le N+1} f(x,\xi^{(i)}) \le 0 \Big\} > v(\mathrm{SO}_N) \,\Big\},
$$

where $\xi^{(N+1)}$ is also sampled from $P$. Here $v(\mathrm{SO}_N)$ is the optimal value of problem $(\mathrm{SO}_N)$. Notice that $V$ is a random variable taking its values in $[0,1]$.
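A sketch of how $V(\hat\Xi_N)$ could be estimated by brute force for the illustrative instance of the previous sketch: re-solve the scenario problem with one extra sampled constraint many times and count how often the optimal value strictly increases. This is only a Monte Carlo approximation of the probability in the definition, and the strict-inequality test uses a small numerical tolerance.

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
c = np.array([-1.0, -1.0])                      # same illustrative instance as above
N = 200
xis = rng.uniform(0.0, 1.0, size=(N, 2))        # the sample hat Xi_N
v_SON = linprog(c, A_ub=xis, b_ub=np.ones(N),
                bounds=[(None, None)] * 2).fun  # v(SO_N)

# Monte Carlo estimate of V(hat Xi_N): draw xi^(N+1) many times and count how
# often the optimal value of the (N+1)-constraint problem strictly exceeds v(SO_N).
M = 2000
hits = 0
for _ in range(M):
    xi_new = rng.uniform(0.0, 1.0, size=(1, 2))
    v_ext = linprog(c, A_ub=np.vstack([xis, xi_new]), b_ub=np.ones(N + 1),
                    bounds=[(None, None)] * 2).fun
    if v_ext > v_SON + 1e-9:                    # tolerance for solver noise
        hits += 1

print("Monte Carlo estimate of V(hat Xi_N):", hits / M)
```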

6. Bounding the distribution of the violation probability

Theorem [CCG Theorem, Calafiore (2010) and Campi/Garatti (2008)]. The distribution of $V$ under $P$ is stochastically smaller (in the first order) than a random variable $Y_{N,n}$, which has the following compound distribution:

$$
Y_{N,n} = \begin{cases}
0, & \text{with probability } 1 - \binom{N}{n}^{-1},\\[4pt]
Z_{N,n}, & \text{with probability } \binom{N}{n}^{-1},
\end{cases}
$$

where $Z_{N,n}$ has a $\mathrm{Beta}(n, N-n+1)$ distribution; that is, for $\epsilon > 0$,

$$
P\{V(\hat\Xi_N) > \epsilon\} \;\le\; P\{Y_{N,n} > \epsilon\} \;=\; n\int_\epsilon^1 (1-v)^{N-n}\,v^{n-1}\,dv \;=:\; B(N,\epsilon,n).
$$
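A small numerical sketch of the bound $B(N,\epsilon,n)$ exactly as defined on this slide, evaluating the integral with scipy; the particular values of $N$, $n$ and $\epsilon$ are arbitrary illustrations.

```python
from scipy.integrate import quad

def ccg_bound(N: int, eps: float, n: int) -> float:
    """B(N, eps, n) = n * integral_eps^1 (1 - v)**(N - n) * v**(n - 1) dv,
    as stated in the CCG theorem on this slide."""
    val, _ = quad(lambda v: (1.0 - v) ** (N - n) * v ** (n - 1), eps, 1.0)
    return n * val

# Illustrative values (arbitrary): n = 10 decision variables, N = 500 samples.
for eps in (0.01, 0.05, 0.10):
    print(f"B(N=500, eps={eps:.2f}, n=10) = {ccg_bound(500, eps, 10):.3e}")
```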

7. These authors also show that

$$
B(N,\epsilon,n) \;=\; \sum_{j=0}^{n}\binom{N}{j}\,\epsilon^j\,(1-\epsilon)^{N-j}.
$$

For any probability level $\epsilon\in(0,1)$ and confidence level $\beta\in(0,1)$, let

$$
N(\epsilon,\beta) := \min\Big\{\, N\in\mathbb{N} \;:\; \sum_{j=0}^{n}\binom{N}{j}\,\epsilon^j\,(1-\epsilon)^{N-j} \le \beta \,\Big\}.
$$

Then $N(\epsilon,\beta)$ is a sample size which guarantees that the probability of an $\epsilon$-violation, i.e. $P\{V(\hat\Xi_N)>\epsilon\}$, lies below $\beta$.
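A sketch of how $N(\epsilon,\beta)$ could be computed by direct search from the binomial-sum form above; the sum is just the $\mathrm{Binomial}(N,\epsilon)$ CDF evaluated at $n$, and the chosen $\epsilon$, $\beta$, $n$ are illustrative.

```python
from scipy.stats import binom

def sample_size(eps: float, beta: float, n: int) -> int:
    """Smallest N with sum_{j=0}^{n} C(N, j) eps^j (1 - eps)^(N - j) <= beta.
    The sum equals the Binomial(N, eps) CDF at n."""
    N = n + 1
    while binom.cdf(n, N, eps) > beta:
        N += 1
    return N

# Illustrative values: eps = 5% violation level, beta = 1e-3 confidence, n = 10.
print("N(0.05, 1e-3) =", sample_size(0.05, 1e-3, 10))
```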

8. The CCG Theorem can also be applied to the problem

$$
\min_{x}\ \sup_{\xi\in\Xi}\, g(x,\xi)\ :\ x\in X(\xi), \tag{3}
$$

where $x\mapsto g(x,\xi)$ is convex and the $X(\xi)$ are convex sets for all $\xi\in\Xi$. Set

$$
f(x,\xi) = g(x,\xi) + \psi_{X(\xi)}(x),
$$

where $\psi$ is the indicator function

$$
\psi_B(x) := \begin{cases} 0 & \text{if } x\in B,\\ \infty & \text{otherwise.} \end{cases}
$$

Then $f$ is convex in $x$ and (3) can be written as

$$
\min_{x}\ \sup_{\xi\in\Xi} f(x,\xi).
$$

Finally, observe that this problem is equivalent to

$$
\min_{x,\gamma}\ \Big\{\, \gamma \;:\; \sup_{\xi\in\Xi} f(x,\xi) - \gamma \le 0 \,\Big\}.
$$

This problem is of the standard form. In this case, the dimension of the decision variable is $n+1$.
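A sketch of the sampled epigraph reformulation, with illustrative choices of my own (not from the slides): $g(x,\xi)=\lVert x-\xi\rVert_2$, convex in $x$, and $X(\xi)=\{x:\xi^\top x\le 1\}$, a convex set. The sampled problem is $\min_{x,\gamma}\gamma$ subject to $g(x,\xi^{(i)})\le\gamma$ and $x\in X(\xi^{(i)})$ for $i=1,\dots,N$, modeled here with cvxpy as one possible standard solver interface.

```python
import cvxpy as cp
import numpy as np

rng = np.random.default_rng(1)

# Illustrative: xi sampled uniformly from the unit square Xi = [0, 1]^2.
N = 100
xis = rng.uniform(0.0, 1.0, size=(N, 2))

x = cp.Variable(2)
gamma = cp.Variable()            # epigraph variable: decision dimension is n + 1

constraints = []
for xi in xis:
    constraints.append(cp.norm(x - xi, 2) <= gamma)   # g(x, xi^(i)) - gamma <= 0
    constraints.append(xi @ x <= 1)                   # x in X(xi^(i))

prob = cp.Problem(cp.Minimize(gamma), constraints)
prob.solve()
print("x* =", x.value, " worst sampled g value =", gamma.value)
```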

9. An example illustrating the violation probability

The original problem:

$$
\text{Maximize } x \quad\text{subject to}\quad x^2 + y^2 \le 1.
$$

The reformulation as a problem with an infinite number of linear constraints:

$$
\text{Maximize } x \quad\text{subject to}\quad x\cos(\xi) + y\sin(\xi) \le 1 \ \text{ for all } 0\le\xi\le 2\pi.
$$

The randomly sampled problem, $\xi^{(i)}\sim\mathrm{Uniform}[0,2\pi]$:

$$
\text{Maximize } x \quad\text{subject to}\quad x\cos(\xi^{(i)}) + y\sin(\xi^{(i)}) \le 1 \ \text{ for } i = 1,\dots,N.
$$
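A sketch of this example in code, assuming scipy's LP routine as the solver and an illustrative sample size of $N = 50$. For the sampled solution, the violation probability can be read off geometrically as the fraction of directions $\xi\in[0,2\pi]$ whose constraint is violated at the sampled optimum (the blue arc on the next slide); the last lines estimate that fraction on a fine grid.

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
N = 50                                             # illustrative sample size

# Sampled circle problem: maximize x subject to
# x*cos(xi_i) + y*sin(xi_i) <= 1, xi_i ~ Uniform[0, 2*pi], i = 1, ..., N.
xi = rng.uniform(0.0, 2.0 * np.pi, size=N)
A = np.column_stack([np.cos(xi), np.sin(xi)])
res = linprog(c=[-1.0, 0.0], A_ub=A, b_ub=np.ones(N),
              bounds=[(None, None)] * 2)           # maximize x  <=>  minimize -x
x_star = res.x
print("sampled optimum (x, y) =", x_star, " value =", -res.fun)

# Violation probability of the sampled solution: relative length of the arc of
# directions xi whose constraint x*cos(xi) + y*sin(xi) <= 1 fails at x_star.
grid = np.linspace(0.0, 2.0 * np.pi, 100_000, endpoint=False)
violated = x_star[0] * np.cos(grid) + x_star[1] * np.sin(grid) > 1.0
print("violation probability (arc fraction):", violated.mean())
```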

10. Illustration

[Figure: the sampled constraints for N = 5 and N = 10. The random violation probability is represented by the blue arc length (relative to the total circumference $2\pi$).]

11. Extending the notion of violation probability to the multistage case

In the multistage situation, we have to respect the non-anticipativity conditions. Based on a finite random selection $\hat\Xi^{N_1}_1,\dots,\hat\Xi^{N_H}_H$,

$$
\begin{aligned}
\hat\Xi^{N_1}_1 &= \{\xi^{(1)}_1,\dots,\xi^{(N_1)}_1\},\\
\hat\Xi^{N_2}_2 &= \{\xi^{(1)}_2,\dots,\xi^{(N_2)}_2\},\\
&\ \,\vdots\\
\hat\Xi^{N_H}_H &= \{\xi^{(1)}_H,\dots,\xi^{(N_H)}_H\},
\end{aligned}
$$

we generate a random tree $\hat T^{N_1,\dots,N_H}$, where $\{\xi^{(1)}_1,\dots,\xi^{(N_1)}_1\}$ are the successors of the root, and recursively all nodes at stage $t$ get all values from $\hat\Xi^{N_t}_t$ as successors. Notice that the number of nodes at stage $t+1$ of the tree is $\bar N_t := \prod_{s=1}^{t} N_s$. The total number of nodes of the tree is $N_{\mathrm{tot}} := 1 + \sum_{i=1}^{H} \bar N_i$.
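A small sketch of the tree construction and of the node-count formulas $\bar N_t=\prod_{s=1}^t N_s$ and $N_{\mathrm{tot}}=1+\sum_t\bar N_t$; the per-stage samples are stand-ins drawn from hypothetical sets $\Xi_t=[0,1]$, purely for illustration.

```python
import math
import numpy as np

rng = np.random.default_rng(0)

# Per-stage sample sizes, e.g. the tree hat T^{3,2} from the next slides.
N = [3, 2]                                  # N_1, ..., N_H
H = len(N)

# Illustrative per-stage samples hat Xi_t^{N_t} (scalars drawn uniformly from
# hypothetical compact sets Xi_t = [0, 1]; the actual Xi_t are problem data).
samples = [rng.uniform(0.0, 1.0, size=N[t]) for t in range(H)]

# Build the tree as a list of scenario prefixes: every node carrying a history
# of length t gets all values of the next stage's sample set as successors.
nodes = [()]                                # the root
for t in range(H):
    nodes += [prefix + (xi,) for prefix in nodes if len(prefix) == t
              for xi in samples[t]]

# Node counts: bar N_t = N_1 * ... * N_t, and N_tot = 1 + sum_t bar N_t.
bar_N = [math.prod(N[: t + 1]) for t in range(H)]
N_tot = 1 + sum(bar_N)
print("bar N_t:", bar_N, " N_tot:", N_tot, " matches tree:", len(nodes) == N_tot)
```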

12. The violation probability in the multistage situation

The violation probability $V_t$ at stage $t$ is defined in the following way. Given the random tree $\hat T^{N_1,\dots,N_H}$, suppose that we sample an additional element $\xi^{(N_t+1)}_t$ in $\Xi_t$ and form the extended tree $\hat T^{N_1,\dots,N_t+1,\dots,N_H}$. Then

$$
V_t(\hat T^{N_1,\dots,N_H}) = P\big\{\, \xi^{(N_t+1)}_t \;:\; v(\hat T^{N_1,\dots,N_t+1,\dots,N_H}) > v(\hat T^{N_1,\dots,N_H}) \,\big\}.
$$

Here $v(T)$ is the value of the multistage optimization problem on the tree $T$.

13. Illustration

[Figure: the original sampled tree $\hat T^{3,2}$, with three stage-1 successors of the root and the same two stage-2 successors attached to each of them.]

14. [Figure: the randomly extended tree $\hat T^{4,2}$. A new observation in stage 1 is added; the new nodes are in bold.]

15. [Figure: the randomly extended tree $\hat T^{3,3}$. A new observation in stage 2 is added; the new nodes are in bold.]
