Constant-factor approximation algorithms for the minmax regret problem



  1. Constant-factor approximation algorithms for the minmax regret problem. Juan Pablo Fernández G., Universidad de Medellín, Cra 87 No 30-65, Colombia. E-mail: jpfernandez@udem.edu.co. Adviser: Eduardo Conde, Universidad de Sevilla, España. Doc-course, May 21, 2010.

  2. Outline.
     1. Introduction and existing results: the minmax regret approach for parameter optimization problems; results from general complexity; some constant-factor approximation algorithms; a general 2-approximation result.
     2. Applications: the sequencing problem n/1//F; finding a compromise solution in a linear multiple-objective problem; facility location under uncertain demand.
     3. Bibliography.



  5. Definition. Given an optimization problem with a parametric cost function
     Opt(w) = min_{x ∈ X} F(x, w),
     where the parameter w lies in W, a hyperrectangle in R^n, and X ⊆ R^n is a compact feasible set. How should we choose x when the scenario w is unknown?

  6. Definition (minmax regret criterion). The minimization of the maximum absolute regret can be expressed as
     min_{x ∈ X} Z(x),
     where Z(x) = max_{w ∈ W} R(x, w) is the worst-case regret and R(x, w) = F(x, w) − min_{y ∈ X} F(y, w) is the regret assigned to the feasible solution x under scenario w.
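A minimal Python sketch of the definition above, under the extra assumption (used later for the combinatorial problems) that F(x, w) is linear in w and X is a small finite set; linearity makes R(x, ·) convex in w, so the maximum regret is attained at a vertex of the hyperrectangle W and it suffices to enumerate the extreme scenarios. All names and the tiny instance are illustrative, not from the slides.

    # Brute-force minmax regret for a small finite X and F(x, w) = sum_i w_i x_i.
    from itertools import product

    def F(x, w):
        return sum(wi * xi for wi, xi in zip(w, x))

    def max_regret(x, X, intervals):
        # Z(x) = max_w [ F(x, w) - min_y F(y, w) ] over the vertices of W.
        best = float("-inf")
        for w in product(*intervals):          # every extreme scenario
            opt_w = min(F(y, w) for y in X)    # Opt(w)
            best = max(best, F(x, w) - opt_w)
        return best

    def minmax_regret(X, intervals):
        return min(X, key=lambda x: max_regret(x, X, intervals))

    # Tiny example: choose one of three 0/1 "plans" under interval costs.
    intervals = [(1, 3), (2, 2), (1, 5)]       # [lo_i, hi_i] for each coordinate
    X = [(1, 0, 1), (0, 1, 1), (1, 1, 0)]
    x_star = minmax_regret(X, intervals)
    print(x_star, max_regret(x_star, X, intervals))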

  7. Robust? Let x⋆ be a minmax regret solution and suppose w^H is the scenario that takes place after the decision x⋆ has been implemented. If y^H is an optimal solution of Opt(w^H), then
     F(x⋆, w^H) − F(y^H, w^H) ≤ ε, where ε = Z(x⋆).
     So the loss with respect to hindsight is bounded by the optimal worst-case regret, whatever scenario occurs.

  8. Minmax regret complexity I. In [4] one of the classical combinatorial problems is described as follows.
     Definition (elements of the relative robust shortest path problem, RRSPP):
     G = (V, A), a directed arc-weighted graph.
     V, the node set, |V| = n.
     A, the arc set, |A| = m.
     [l_ij^-, l_ij^+], (i, j) ∈ A: the lengths (weights) of the arcs are intervals which express the ranges of possible realizations of the lengths. No probability distribution is assumed for the arc lengths.

  9. Minmax regret complexity I. Definition (elements of RRSPP, continued):
     An assignment of a length l_ij^w ∈ [l_ij^-, l_ij^+] to each arc (i, j) ∈ A is called a scenario w, where l_ij^w denotes the length of arc (i, j) in scenario w.
     P denotes the set of all paths in G from o to d.
     l_p^w = Σ_{(i,j) ∈ p} l_ij^w denotes the length of a path p ∈ P in scenario w.
     W denotes the set of possible scenarios.

  10. Minmax regret complexity I. Applying the minmax regret concept:
     Definition (elements of RRSPP, continued):
     d_p^w = l_p^w − l_{p⋆(w)}^w is the regret of path p in scenario w, where p⋆(w) ∈ P is a shortest path in scenario w.
     Z_p = max_{w ∈ W} d_p^w is the maximum regret of p.
     The problem
     min_{p ∈ P} Z_p = min_{p ∈ P} max_{w ∈ W} ( l_p^w − l_{p⋆(w)}^w )    (1)
     equivalently defines RRSPP.
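The maximum regret Z_p of a fixed path can be computed with a single shortest-path call, using the standard worst-case characterization for interval data: the maximizing scenario sets every arc of p to its upper length and every other arc to its lower length. A small Python sketch; the graph encoding and names are illustrative, not from the slides.

    # Maximum regret of a fixed o-d path p in the RRSPP.
    import heapq

    def dijkstra(adj, source, target):
        # adj: {node: [(neighbour, length), ...]} with non-negative lengths.
        dist = {source: 0.0}
        heap = [(0.0, source)]
        while heap:
            d, u = heapq.heappop(heap)
            if u == target:
                return d
            if d > dist.get(u, float("inf")):
                continue
            for v, luv in adj[u]:
                nd = d + luv
                if nd < dist.get(v, float("inf")):
                    dist[v] = nd
                    heapq.heappush(heap, (nd, v))
        return float("inf")

    def max_regret_of_path(arcs, path_arcs, o, d):
        # arcs: {(i, j): (lower, upper)}; path_arcs: the arcs of the o-d path p.
        worst = {(i, j): (hi if (i, j) in path_arcs else lo)
                 for (i, j), (lo, hi) in arcs.items()}     # worst-case scenario for p
        adj = {}
        for (i, j), l in worst.items():
            adj.setdefault(i, []).append((j, l))
            adj.setdefault(j, [])
        l_p = sum(worst[a] for a in path_arcs)
        return l_p - dijkstra(adj, o, d)                   # Z_p = l_p - shortest length

    arcs = {("o", "a"): (1, 4), ("a", "d"): (1, 3), ("o", "d"): (3, 6)}
    print(max_regret_of_path(arcs, {("o", "a"), ("a", "d")}, "o", "d"))   # prints 4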

  11. Minmax regret complexity I. For problem (1) the following has been proved.
     Theorem.
     1. Problem (1) is NP-hard.
     2. The decision version of (1) is NP-complete, even if G is restricted to planar acyclic graphs with node degree three.
     3. Problem (1) remains NP-hard even if G is restricted to planar acyclic graphs with node degree three.

  12. Minmax regret complexity II. In [6] another combinatorial problem is described as follows.
     Definition (elements of the problem of minimizing the total flow time in a scheduling problem with interval data (MTFT) via the minmax regret criterion):
     J, |J| = n, n ≥ 2: the set of jobs that have to be processed on a single machine. The machine cannot process more than one job at a time.
     p̂_k = [p_k^-, p_k^+], J_k ∈ J: the processing times are intervals which express the ranges of possible processing times of the jobs.
     An assignment of processing times p_k^w ∈ p̂_k to the jobs J_k ∈ J is called a scenario.

  13. Minmax regret complexity II. Definition (elements of MTFT, continued):
     W, the set of all scenarios, is the Cartesian product of all intervals p̂_k.
     π = (π(1), ..., π(n)) is a schedule of the jobs; Π is the set of all feasible schedules.
     The total flow time of π under scenario w is
     F(π, w) = Σ_{k=1}^{n} (n − k + 1) p_{π(k)}^w.    (2)

  14. Minmax regret complexity II. Applying the minmax regret concept:
     Definition (elements of MTFT, continued):
     R(π, w) = F(π, w) − F⋆(w) is the regret assigned to the schedule π in scenario w, where F⋆(w) = min_{y ∈ Π} F(y, w) is the total flow time of the shortest-processing-time schedule under scenario w.
     Z(π) = max_{w ∈ W} R(π, w) is the maximum regret.
     The minmax regret version of problem MTFT is
     min_{π ∈ Π} Z(π).    (3)
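Since F(π, ·) is linear in the processing times and F⋆(·) is a minimum of linear functions, R(π, ·) is convex, so Z(π) is attained at an extreme scenario; for a fixed scenario, F⋆(w) is given by the shortest-processing-time (SPT) rule. A brute-force Python sketch built on these two facts (exponential in n, for illustration only; the instance is made up):

    # Maximum regret Z(pi) of a schedule in MTFT, by enumerating extreme scenarios.
    from itertools import product

    def flow_time(schedule, p):
        # schedule[k] = index of the job in position k+1; p[j] = processing time of job j.
        n = len(schedule)
        return sum((n - k) * p[schedule[k]] for k in range(n))   # (n - k + 1) with 1-based k

    def opt_flow_time(p):
        spt = sorted(range(len(p)), key=lambda j: p[j])          # SPT schedule is optimal
        return flow_time(spt, p)

    def max_regret(schedule, intervals):
        # intervals[j] = (lower, upper) processing time of job j.
        return max(flow_time(schedule, w) - opt_flow_time(w)
                   for w in product(*intervals))                 # extreme scenarios only

    intervals = [(2, 6), (3, 5), (1, 8)]
    print(max_regret([0, 1, 2], intervals))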

  15. Minmax regret complexity II.
     Definition (problem ROB1). Problem ROB1 is the special case of problem (3) in which all intervals of uncertainty have the same center, that is, (p_k^- + p_k^+)/2 is the same for all J_k ∈ J.
     Definition. Let J_l, J_k ∈ J be jobs. Job J_l is wider than job J_k if p̂_k ⊂ p̂_l.

  16. Minmax regret complexity II.
     Definition. For any job J_k ∈ J and schedule π ∈ Π, let
     q(π, J_k) = min{ n − π(k), π(k) − 1 },
     where π(k) denotes the position of J_k, so q measures the distance of J_k from the nearer end of the schedule. A permutation π ∈ Π is called uniform if, for any J_l, J_k ∈ J, whenever J_l is wider than J_k we have q(π, J_l) ≥ q(π, J_k). A checker is sketched below.
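A small Python sketch of a uniformity checker that follows the definition literally; the list encoding of positions and intervals is an illustrative choice, not from the slides.

    # Check whether a permutation is uniform: wider jobs must sit closer to the middle.
    def q(positions, k):
        n = len(positions)
        return min(n - positions[k], positions[k] - 1)   # distance to the nearer end

    def wider(intervals, l, k):
        # J_l is wider than J_k if the interval of J_k is strictly contained in that of J_l.
        (lo_k, hi_k), (lo_l, hi_l) = intervals[k], intervals[l]
        return lo_l <= lo_k and hi_k <= hi_l and (lo_k, hi_k) != (lo_l, hi_l)

    def is_uniform(positions, intervals):
        n = len(positions)
        return all(q(positions, l) >= q(positions, k)
                   for l in range(n) for k in range(n)
                   if l != k and wider(intervals, l, k))

    intervals = [(4, 6), (3, 7), (1, 9)]       # job 3 is wider than job 2, which is wider than job 1
    print(is_uniform([1, 2, 3], intervals))    # False: the widest job sits at an end
    print(is_uniform([1, 3, 2], intervals))    # True: wider jobs sit closer to the middle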

  17. Minmax regret complexity II.
     Theorem.
     1. If the number of jobs n is even, then any uniform permutation is an optimal solution to problem ROB1 (and therefore problem ROB1 with an even number of jobs is solvable in O(n log n) time).
     2. Problem ROB1 with an odd number of jobs is NP-hard.
     3. Problem (3) is NP-hard; it remains NP-hard even if the number of jobs is even.

  18. Some constant-factor approximation algorithms.
     Definition (elements of the problem):
     E = {e_1, e_2, ..., e_n}, a finite set.
     Φ ⊆ 2^E, a set of feasible solutions.
     ĉ_e = [c_e^-, c_e^+], e ∈ E, the range of possible values of the cost of e.
     w = (c_e^w)_{e ∈ E}, a particular assignment of costs c_e^w to the elements e ∈ E, is called a scenario.
     W, the set of all scenarios, is the Cartesian product of all intervals ĉ_e.

  19. Special combinatorial optimization.
     Definition (problem formulation):
     F(χ, w) = Σ_{e ∈ χ} c_e^w, the cost of a given solution χ ∈ Φ under a fixed scenario w ∈ W.
     R(χ, w) = F(χ, w) − F⋆(w), the regret assigned to the feasible solution χ in scenario w, where F⋆(w) = min_{y ∈ Φ} F(y, w) is the cost of an optimal solution under scenario w.
     Z(χ) = max_{w ∈ W} R(χ, w), the maximum regret.
     min_{χ ∈ Φ} Z(χ)    (4)

  20. Special combinatorial optimization. Using the worst-case characterization we obtain a bound for Z(χ), and then:
     Theorem. Let M be a solution of min_{x ∈ Φ} F(x, w̄) where w̄ = ( (c_e^- + c_e^+)/2 )_{e ∈ E}. Then for every χ ∈ Φ it holds that Z(M) ≤ 2 Z(χ). In particular, if χ⋆ is a solution of (4), then Z(M) ≤ 2 Z(χ⋆).
     M is known as the mid-point solution and w̄ as the mid-point scenario.
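The resulting algorithm is extremely simple: build the mid-point scenario and solve one deterministic problem. A Python sketch, where `solver` stands for any exact routine for min_{x ∈ Φ} Σ_{e ∈ x} c_e; the toy solver and the instance below are illustrative, not from the slides.

    # Mid-point heuristic: solve the deterministic problem under the mid-point scenario.
    def midpoint_solution(intervals, solver):
        # intervals: {e: (c_lower, c_upper)}; solver: costs dict -> optimal feasible solution.
        mid = {e: (lo + hi) / 2.0 for e, (lo, hi) in intervals.items()}
        return solver(mid)   # M, with Z(M) <= 2 Z(chi*) by the theorem

    # Toy "solver": Phi consists of two explicitly listed solutions.
    def toy_solver(costs):
        candidates = [{"e1", "e2"}, {"e2", "e3"}]
        return min(candidates, key=lambda s: sum(costs[e] for e in s))

    intervals = {"e1": (1, 5), "e2": (2, 2), "e3": (2, 6)}
    print(midpoint_solution(intervals, toy_solver))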

  21. Classical formulation of sequencing. We return to the problem of n jobs to be processed on a single machine, but now we consider it with precedence constraints.
     Definition. The following notation is used. There are n jobs to be processed on a single machine; the subscript i refers to job J_i and the subscript k to the position in which a job is processed. The following data pertain to job J_i:
     1. p_i, the processing time of job J_i;
     2. x_ik = 1 if J_i is processed in the k-th position, and 0 otherwise.

  22. Classical formulation of sequencing.
     Definition (continued):
     3. C_i, the completion time of job J_i;
     4. C(k), the completion time of the job processed in the k-th position.
     The completion time of the job in position k can be computed recursively as C(k) = C(k − 1) + Σ_{i=1}^{n} p_i x_ik, with C(0) = 0; a small numerical illustration follows.
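A short Python check of the recursion above and of the fact that Σ_k C(k) equals the total flow time of the encoded schedule; the 0/1 matrix encoding matches the positional variables of the integer program on the next slide, and the instance is illustrative.

    # Completion times from the positional variables x[i][k] (0-based indices here).
    def completion_times(p, x):
        # x[i][k] = 1 if job i is processed in position k.
        n = len(p)
        C, prev = [], 0.0
        for k in range(n):
            prev += sum(p[i] * x[i][k] for i in range(n))   # C(k) = C(k-1) + sum_i p_i x_ik
            C.append(prev)
        return C

    p = [3.0, 1.0, 2.0]
    # Schedule J2, J3, J1: job 1 in position 3, job 2 in position 1, job 3 in position 2.
    x = [[0, 0, 1],
         [1, 0, 0],
         [0, 1, 0]]
    C = completion_times(p, x)
    print(C, sum(C))   # sum of completion times = total flow time = 10.0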

  23. Classical formulation of sequencing. The 2-approximation result above cannot be applied. Integer programming formulation:
     min Σ_{k=1}^{n} Σ_{j=1}^{k} Σ_{i=1}^{n} p_i x_ij
     subject to
     Σ_{k=1}^{n} x_ik = 1 for i = 1, ..., n;
     x_qk − Σ_{j=1}^{k−1} x_pj ≤ 0 for every k and every pair p, q such that job J_p precedes job J_q;
     x_ik ∈ {0, 1} for i, k = 1, ..., n.
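A sketch of this integer program with PuLP (assumed to be available). The constraint that each position receives exactly one job is not visible on the slide but is needed for the positional formulation to encode a schedule, so it is added here and flagged as an assumption; the instance at the bottom is illustrative.

    # Positional integer program for total flow time with precedence constraints.
    from pulp import LpProblem, LpMinimize, LpVariable, lpSum, LpBinary

    def sequencing_ip(p, precedences):
        # p: processing times indexed 1..n (dict); precedences: pairs (a, b) meaning J_a precedes J_b.
        n = len(p)
        jobs = range(1, n + 1)
        x = LpVariable.dicts("x", (jobs, jobs), cat=LpBinary)    # x[i][k] = 1 iff J_i in position k

        prob = LpProblem("min_total_flow_time", LpMinimize)
        prob += lpSum(p[i] * x[i][j] for k in jobs for j in range(1, k + 1) for i in jobs)

        for i in jobs:                                           # each job gets one position
            prob += lpSum(x[i][k] for k in jobs) == 1
        for k in jobs:                                           # each position gets one job
            prob += lpSum(x[i][k] for i in jobs) == 1            # (assumed, completes the model)
        for (a, b) in precedences:                               # J_a precedes J_b
            for k in jobs:
                prob += x[b][k] - lpSum(x[a][j] for j in range(1, k)) <= 0

        prob.solve()
        return {i: next(k for k in jobs if x[i][k].varValue > 0.5) for i in jobs}

    print(sequencing_ip({1: 3, 2: 1, 3: 2}, {(1, 3)}))   # J_1 must precede J_3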

  24. Sequencing optimization problem. For the sake of simplicity we denote by i_π the position occupied by job J_i in the schedule π. The total flow time function then becomes
     F(π, w) = Σ_{i=1}^{n} (n − i_π + 1) p_i^w,
     and the following property holds.

  25. Sequencing optimization problem.
     Property. For any two feasible schedules π, σ and any scenario w ∈ W:
     1. F(π, w) − F(σ, w) = Σ_{i=1}^{n} (i_σ − i_π) p_i^w.
     2. Z(π) ≥ Σ_{i : i_σ > i_π} (i_σ − i_π) p_i^+ + Σ_{i : i_σ < i_π} (i_σ − i_π) p_i^-.
     3. Z(σ) ≤ Z(π) + Σ_{i : i_π > i_σ} (i_π − i_σ) p_i^+ + Σ_{i : i_π < i_σ} (i_π − i_σ) p_i^-.
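A purely illustrative numerical check of the identity in 1. and of the lower bound in 2. as written above, with Z(π) computed by brute force over the extreme scenarios (valid because the regret is convex in w) and F⋆(w) given by the SPT rule; all names and the random instance are made up.

    # Check Property 1. (identity) and 2. (lower bound) on a random instance.
    import random
    from itertools import product

    def flow_time(pos, p):
        # pos[i] = position (1-based) of job J_i in the schedule.
        n = len(p)
        return sum((n - pos[i] + 1) * p[i] for i in range(n))

    def sorted_positions(p):
        # Positions of the SPT schedule, which is optimal for a fixed scenario.
        order = sorted(range(len(p)), key=lambda i: p[i])
        pos = [0] * len(p)
        for k, i in enumerate(order, start=1):
            pos[i] = k
        return pos

    def max_regret(pos, intervals):
        return max(flow_time(pos, w) - flow_time(sorted_positions(w), w)
                   for w in product(*intervals))

    n = 4
    intervals = [(random.randint(1, 5), random.randint(6, 9)) for _ in range(n)]
    pi = [1, 2, 3, 4]
    sigma = [2, 4, 1, 3]
    w = [random.uniform(lo, hi) for lo, hi in intervals]

    lhs = flow_time(pi, w) - flow_time(sigma, w)
    rhs = sum((sigma[i] - pi[i]) * w[i] for i in range(n))
    assert abs(lhs - rhs) < 1e-9                                  # identity 1.

    bound = (sum((sigma[i] - pi[i]) * intervals[i][1] for i in range(n) if sigma[i] > pi[i])
             + sum((sigma[i] - pi[i]) * intervals[i][0] for i in range(n) if sigma[i] < pi[i]))
    assert max_regret(pi, intervals) >= bound - 1e-9              # lower bound 2.
    print("identity 1. and lower bound 2. hold on this instance")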
