  1. Dynamic Programming (Kevin Zatloukal, July 18, 2011)

  2. Motivation
     Dynamic programming deserves special attention:
     ◮ technique you are most likely to use in practice
     ◮ the (few) novel algorithms I’ve invented used it
     ◮ dynamic programming algorithms are ubiquitous in CS
     ◮ more robust than greedy to changes in problem definition
     ◮ actually simpler than greedy
     ◮ (usually) easy correctness proofs and implementation
     ◮ easy to optimize
     In short, it’s simpler, more general, and more often useful.

  3. Motivation
     Applications of dynamic programming in CS:
     – compilers: parsing general context-free grammars, optimal code generation
     – machine learning: speech recognition
     – databases: query optimization
     – graphics: optimal polygon triangulation
     – networks: routing
     – applications: spell checking, file diffing, document layout, regular expression matching

  4. What is Dynamic Programming?
     The key is to relate the solution of the whole problem to the solutions of subproblems.
     – a subproblem is a problem of the same type but smaller size
     – e.g., relating the solution for the whole tree to the solutions on each subtree
     The same is true of divide & conquer, but here the subproblems need not be disjoint.
     – they need not divide the input (i.e., they can “overlap”)
     – divide & conquer is a special case of dynamic programming
     A dynamic programming algorithm computes the solution of every subproblem needed to build up the solution for the whole problem.
     – compute each solution using the above relation
     – store all the solutions in an array (or matrix)
     – the algorithm simply fills in the array entries in some order
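To make the array-filling idea concrete, here is a minimal Python sketch (my illustration, not from the deck): computing Fibonacci numbers, a textbook case of overlapping subproblems, by storing every subproblem solution in an array and filling it in order.

    def fib(n):
        # opt[k] holds the solution of subproblem k (the k-th Fibonacci number)
        opt = [0] * (n + 1)
        if n >= 1:
            opt[1] = 1
        # fill in the array in order, using the relation opt[k] = opt[k-1] + opt[k-2];
        # plain divide & conquer, with no array, would recompute these
        # overlapping subproblems exponentially many times
        for k in range(2, n + 1):
            opt[k] = opt[k - 1] + opt[k - 2]
        return opt[n]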

  5. Example 1: Weighted Interval Scheduling
     Recall Interval Scheduling. In the Interval Scheduling problem, we were given a set of intervals I = {(s_i, f_i) | i = 1, ..., n}, with start and finish times s_i and f_i. Our goal was to find a subset J ⊂ I such that
     – no two intervals in J overlap and
     – |J| is as large as possible.
     Greedy worked by picking the remaining interval that finishes first.
     – This gives the blue intervals in the example.
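For reference, that greedy rule is easy to sketch in Python (my code; the deck shows only the picture):

    def max_nonoverlapping(intervals):
        # intervals: list of (start, finish) pairs
        # greedy rule: always take the remaining interval that finishes first
        chosen = []
        last_finish = float("-inf")
        for s, f in sorted(intervals, key=lambda iv: iv[1]):
            if s >= last_finish:  # compatible with everything chosen so far
                chosen.append((s, f))
                last_finish = f
        return chosen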

  6. Example 1: Weighted Interval Scheduling
     Problem Definition. In the Weighted Interval Scheduling problem, we are given the set I along with a set of weights {w_i}. Now, we wish to find the subset J ⊂ I such that
     – no two intervals in J overlap and
     – Σ_{i ∈ J} w_i is as large as possible.
     For example, if we add weights to our picture, we get a new solution shown in blue. (Figure: the example intervals, now labeled with weights 2, 1, 1, 1, 1.)

  7. Example 1: Weighted Interval Scheduling
     Don’t Be Greedy. As this example shows, the greedy algorithm no longer works.
     – greedy throws away intervals regardless of their weights
     Furthermore, no simple variation seems to fix this.
     – we know of no greedy algorithm for solving this problem
     As we will now see, this can be solved by dynamic programming.
     – dynamic programming is more general
     – we will see another example of this later on

  8. Example 1: Weighted Interval Scheduling
     Relation. Let OPT(I′) denote the value of the optimal solution of the problem with intervals chosen from I′ ⊂ I. Consider removing the last interval ℓ_n = (s_n, f_n) ∈ I.
     – How does OPT(I) relate to OPT(I − {ℓ_n})?
     – OPT(I − {ℓ_n}) is the value of the optimal solution that does not use ℓ_n.
     – What is the value of the optimal solution that does use ℓ_n?
     – It must be w_n + OPT(I − conflicts(ℓ_n)), where conflicts(ℓ_n) is the set of intervals overlapping ℓ_n. (Why?)
     – Hence, we must have
       OPT(I) = max{OPT(I − {ℓ_n}), w_n + OPT(I − conflicts(ℓ_n))}.
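This relation can be transcribed directly into a correct (though exponential-time) recursive sketch in Python; this is my illustration of the idea, not code from the deck:

    def opt_value(intervals, weights):
        # intervals: list of (start, finish); weights[i] goes with intervals[i]
        def opt(indices):
            if not indices:
                return 0
            n = indices[-1]                  # plays the role of ℓ_n
            s_n, f_n = intervals[n]
            without_n = opt(indices[:-1])    # OPT(I − {ℓ_n})
            # I − conflicts(ℓ_n): keep only intervals not overlapping ℓ_n
            compatible = [i for i in indices
                          if intervals[i][1] <= s_n or intervals[i][0] >= f_n]
            with_n = weights[n] + opt(compatible)
            return max(without_n, with_n)
        return opt(list(range(len(intervals))))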

  9. Example 1: Weighted Interval Scheduling
     Relation (cont.). We can simplify this by looking at conflicts(ℓ_n) in more detail:
     – conflicts(ℓ_n) is the set of intervals finishing after ℓ_n starts.
     – If we sort I by finish time, then these form a suffix. (Figure: ℓ_n and the suffix of intervals conflicting with it.)
     Let p(s_n) denote the index of the first interval finishing after s_n. Then
     – conflicts(ℓ_n) = {ℓ_{p(s_n)}, ..., ℓ_n}
     – I − {ℓ_n} = {ℓ_1, ..., ℓ_{n−1}}
     – I − conflicts(ℓ_n) = {ℓ_1, ..., ℓ_{p(s_n)−1}}
     Let OPT(k) = OPT({ℓ_1, ..., ℓ_k}). Then we have
       OPT(n) = max{OPT(n − 1), w_n + OPT(p(s_n) − 1)}.
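Once the finish times are sorted, p(s) is a single binary search. A short sketch (mine) using Python's bisect, with the 1-based indexing the slides use:

    from bisect import bisect_right

    def p(s, finish):
        # finish: all finish times, sorted ascending
        # bisect_right counts the finish times <= s, so that count (0-based)
        # marks the first interval finishing strictly after s
        return bisect_right(finish, s) + 1  # +1 converts to 1-based indexing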

 10. Example 1: Weighted Interval Scheduling
     Pseudocode. Store the values of OPT in an array opt-val:
     – start out with OPT(0) = 0
     – fill in the rest of the array using the relation
     Schedule-Weighted-Intervals(start, finish, weight, n)
       sort start, finish, weight by finish
       opt-val ← New-Array()
       opt-val[0] ← 0
       for i ← 1 to n
         do j ← Binary-Search(start[i], finish, n)
            opt-val[i] ← max{opt-val[i − 1], weight[i] + opt-val[j − 1]}
       return opt-val[n]
     Running time is clearly O(n log n).
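A runnable Python transcription of this pseudocode (my sketch; names follow the slides, and Binary-Search is realized with bisect_right as above):

    from bisect import bisect_right

    def schedule_weighted_intervals(intervals, weights):
        # sort intervals (and their weights) by finish time
        order = sorted(range(len(intervals)), key=lambda i: intervals[i][1])
        start = [intervals[i][0] for i in order]
        finish = [intervals[i][1] for i in order]
        weight = [weights[i] for i in order]
        n = len(order)
        opt_val = [0] * (n + 1)  # opt_val[0] = OPT(0) = 0
        for i in range(1, n + 1):
            # j = p(s_i): 1-based index of first interval finishing after s_i
            j = bisect_right(finish, start[i - 1]) + 1
            opt_val[i] = max(opt_val[i - 1], weight[i - 1] + opt_val[j - 1])
        return opt_val, start, finish

For example, schedule_weighted_intervals([(0, 3), (2, 5), (4, 7)], [2, 4, 3]) yields opt_val ending in 5 (take the weight-2 and weight-3 intervals). The sorted start and finish arrays are returned as well so the solution itself can be recovered, as on the last slide.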

 11. Example 1: Weighted Interval Scheduling
     Observations
     ◮ This is efficient primarily because of the special structure of conflicts(ℓ_n), which depends on ordering the intervals. If we had to compute OPT(J) for every J ⊂ I, the algorithm would run in Ω(2^n) time.
     ◮ This is still mostly a brute-force search. We excluded only solutions that are suboptimal on subproblems.
     ◮ Dynamic programming always works, but it is not always efficient. (Textbooks call it “dynamic programming” only when it is efficient.)
     ◮ It is hopefully intuitive that dynamic programming often gives efficient algorithms when greedy does not work.

 12. Example 1: Weighted Interval Scheduling
     Finding the Solution (Not Just Its Value). Often we want the actual solution, not just its value. The simplest idea would be to create another array opt-set, such that opt-set[k] stores the set of intervals with weight opt-val[k].
     – each set might be Θ(n) in size
     – so the algorithm might now take Θ(n^2) time
     Instead, we can just record enough information to figure out whether each ℓ_i was in the optimal solution or not.
     – but this is in the opt-val array already
     – ℓ_i is included iff OPT(i) = w_i + OPT(p(s_i) − 1), or equivalently iff OPT(i) > OPT(i − 1)

 13. Example 1: Weighted Interval Scheduling
     Finding the Solution (Not Just Its Value) (cont.)
     Optimal-Weighted-Intervals(opt-val, n)
       opt-set ← ∅
       i ← n
       while i > 0
         do if opt-val[i] > opt-val[i − 1]
              then opt-set ← opt-set ∪ {i}
                   i ← Binary-Search(start[i], finish, n) − 1
              else i ← i − 1
       return opt-set
     This approach can be used for any dynamic programming algorithm.
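A runnable counterpart to this pseudocode (again my sketch), taking opt_val, start, and finish as returned by schedule_weighted_intervals above; it returns the chosen intervals as 1-based indices in sorted-by-finish order:

    from bisect import bisect_right

    def optimal_weighted_intervals(opt_val, start, finish):
        opt_set = set()
        i = len(opt_val) - 1
        while i > 0:
            if opt_val[i] > opt_val[i - 1]:  # ℓ_i is in the optimal solution
                opt_set.add(i)
                # jump to p(s_i) − 1, skipping everything conflicting with ℓ_i
                i = bisect_right(finish, start[i - 1])
            else:
                i -= 1
        return opt_set

On the small example above, this returns {1, 3}: the weight-2 interval (0, 3) and the weight-3 interval (4, 7).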
