Dynamic Programming (Chapter 6): Algorithm Design Techniques


SLIDE 1

Dynamic Programming (Chapter 6)

SLIDE 2

Algorithm Design Techniques

Greedy Divide and Conquer Dynamic Programming Network Flows

SLIDE 3

Algorithm Design

                       Greedy      Divide and Conquer   Dynamic Programming
Formulate problem      ?           ?                    ?
Design algorithm       less work   more work            more work
Prove correctness      more work   less work            less work
Analyze running time   less work   more work            less work

SLIDE 4

Dynamic Programming “Recipe”

Recursive formulation of optimal solution in terms of subproblems Obvious implementation requires solving exponentially many subproblems Careful implementation to solve only polynomially many different subproblems

SLIDE 5

Interval Scheduling

(Yes, this is an old problem!)

Job j starts at sj and finishes at fj. Two jobs compatible if they don't overlap. Goal: find maximum subset of mutually compatible jobs.

[Timeline figure: jobs a–h arranged on a time axis from 1 to 11]

SLIDE 6

Interval Scheduling: Greedy Solution

Sort jobs by earliest finish time. Take each job provided it's compatible with the ones already taken.

[Timeline figure: the same jobs a–h; the greedy algorithm selects b, e, h]
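A runnable sketch of this greedy rule. The job names a–h are from the slide; the start/finish times below are assumptions chosen to be consistent with its picture:

```python
def greedy_interval_scheduling(jobs):
    """jobs: list of (name, start, finish). Returns a maximum-size set of
    mutually compatible jobs, chosen greedily by earliest finish time."""
    selected = []
    last_finish = float("-inf")
    for name, s, f in sorted(jobs, key=lambda job: job[2]):
        if s >= last_finish:        # compatible with everything taken so far
            selected.append(name)
            last_finish = f
    return selected

# Times are assumptions, not slide data:
jobs = [("a", 0, 6), ("b", 1, 4), ("c", 3, 5), ("d", 3, 8),
        ("e", 4, 7), ("f", 5, 9), ("g", 6, 10), ("h", 8, 11)]
print(greedy_interval_scheduling(jobs))  # -> ['b', 'e', 'h']
```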

SLIDE 7

Weighted Interval Scheduling

Job j starts at sj, finishes at fj, and has weight vj. Two jobs compatible if they don't overlap. Goal: find maximum weight subset of mutually compatible jobs.

[Timeline figure: 8 jobs on a time axis from 1 to 11, with weights 4, 3, 1, 3, 3, 2, 4, 1]

SLIDE 8

Greedy Solution?

  • Observation. The greedy algorithm can be arbitrarily bad when intervals are weighted.

[Timeline figure: a job of weight 999 overlapping a job of weight 1 that finishes earlier]

SLIDE 9

Weighted Interval Scheduling

Label jobs by finishing time: f1 ≤ f2 ≤ ... ≤ fn. p(j) = largest index i < j such that job i is compatible with j. E.g.: p(8) = 5, p(7) = 5, p(2) = 0.

[Timeline figure: the 8 weighted jobs, now labeled 1–8 in order of finish time]
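The values p(1), ..., p(n) can be computed in O(n log n) total with binary search once the jobs are sorted by finish time. A sketch (the instance below is an assumption for illustration, not the slide's data):

```python
import bisect

def compute_p(jobs):
    """jobs: list of (start, finish), already sorted by finish time.
    Returns p where p[j] is the largest index i < j (1-indexed) whose job
    is compatible with job j, or 0 if no such job exists."""
    finishes = [f for _, f in jobs]
    p = [0] * (len(jobs) + 1)          # p[0] unused; jobs are 1-indexed
    for j, (s, _) in enumerate(jobs, start=1):
        # "compatible" means finish <= start, so p[j] is the number of
        # jobs finishing by time s, which bisect_right gives directly
        p[j] = bisect.bisect_right(finishes, s)
    return p

# Illustrative instance (an assumption):
jobs = [(0, 3), (1, 5), (4, 6), (2, 7), (5, 8), (3, 9), (6, 10), (8, 11)]
print(compute_p(jobs))  # -> [0, 0, 0, 1, 0, 2, 1, 3, 5]
```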

SLIDE 10

Dynamic Programming: Binary Choice

OPT(j) = value of optimal solution to the problem consisting of job requests 1, 2, ..., j. Case 1: OPT selects job j. Case 2: OPT does not select job j.

SLIDE 11

If OPT selects job j...

can't use incompatible jobs { p(j) + 1, p(j) + 2, ..., j - 1 } must include optimal solution to problem consisting of remaining compatible jobs 1, 2, ..., p(j)

[Timeline figure: job j selected; jobs p(j) + 1, p(j) + 2, ..., j - 1 highlighted as incompatible with j]

SLIDE 12

If OPT does not select job j...

must include optimal solution to problem consisting of remaining compatible jobs 1, 2, ..., j-1


SLIDE 13

Optimal Substructure

OPT(j) = value of optimal solution to the problem consisting of job requests 1, 2, ..., j. Case 1: OPT selects job j. Case 2: OPT does not select job j.

Recurrence for OPT(j):

OPT(j) = 0                                   if j = 0
OPT(j) = max{ vj + OPT(p(j)), OPT(j-1) }     otherwise
              (Case 1)        (Case 2)

SLIDE 14

Sort jobs by finish time: f1 ≤ f2 ≤ ... ≤ fn.
Compute p(1), p(2), …, p(n).

Compute-Opt(j) {
   if (j = 0)
      return 0
   else
      return max(vj + Compute-Opt(p(j)), Compute-Opt(j-1))
}

Straightforward Recursive Algorithm

Running time?
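The recursion can be sketched directly in Python. The instance is reconstructed from the memoization trace later in the deck; the exact values of p(4) and p(6) are assumptions consistent with that trace:

```python
def compute_opt(j, v, p):
    """Naive recursion for weighted interval scheduling.
    v[j]: weight of job j; p[j]: last compatible job before j (1-indexed)."""
    if j == 0:
        return 0
    return max(v[j] + compute_opt(p[j], v, p),    # Case 1: take job j
               compute_opt(j - 1, v, p))          # Case 2: skip job j

# Reconstructed instance (p[4] and p[6] are assumptions):
v = [0, 3, 2, 4, 1, 3, 4, 3, 1]   # index 0 unused
p = [0, 0, 0, 0, 2, 2, 4, 5, 5]
print(compute_opt(8, v, p))       # -> 9
```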

SLIDE 15

Worst Case Running Time

p(1) = 0, p(j) = j-2

[Figure: recursion tree of Compute-Opt(5) for this instance; each call on j spawns calls on j-1 and j-2, so the number of calls grows exponentially in n]

Worst-case running time is exponential. How can we do better?

SLIDE 16

Sort jobs by finish time so that f1 ≤ f2 ≤ ... ≤ fn.
Compute p(1), p(2), …, p(n).

for j = 1 to n
   M[j] = empty
M[0] = 0

M-Compute-Opt(j) {
   if (M[j] is empty)
      M[j] = max(vj + M-Compute-Opt(p(j)), M-Compute-Opt(j-1))
   return M[j]
}

Memoization

  • Memoization. Store the result of each subproblem in an array; look it up as needed.
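A runnable version of the memoized recursion, using the same reconstructed instance as before (an assumption consistent with the trace on the following slides, not verbatim slide data):

```python
def m_compute_opt(j, v, p, M):
    """Memoized weighted interval scheduling. M caches OPT(j); M[0] = 0."""
    if M[j] is None:
        M[j] = max(v[j] + m_compute_opt(p[j], v, p, M),   # take job j
                   m_compute_opt(j - 1, v, p, M))         # skip job j
    return M[j]

# Reconstructed instance (an assumption):
v = [0, 3, 2, 4, 1, 3, 4, 3, 1]
p = [0, 0, 0, 0, 2, 2, 4, 5, 5]
M = [0] + [None] * 8
print(m_compute_opt(8, v, p, M))  # -> 9
print(M)                          # -> [0, 3, 3, 4, 4, 6, 8, 9, 9]
```

Each M[j] is filled at most once, so only O(n) work happens beyond the initial sort.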

SLIDE 17

Memoization

M: _ _ _ _ _ _ _ _

M[8] = max(1 + M-Compute-Opt(5), M-Compute-Opt(7))

SLIDE 18

Memoization

M: _ _ _ _ _ _ _ _

M[5] = max(3 + M-Compute-Opt(2), M-Compute-Opt(4))

SLIDE 19

Memoization

M: _ _ _ _ _ _ _ _

M[2] = max(2 + 0, M-Compute-Opt(1))

SLIDE 20

Memoization

M: 3 _ _ _ _ _ _ _

M[1] = max(3 + 0, 0)

SLIDE 21

Memoization

M: 3 3 _ _ _ _ _ _

M[2] = max(2 + 0, 3)

SLIDE 22

Memoization

M: 3 3 _ _ _ _ _ _

M[5] = max(3 + 3, M-Compute-Opt(4))

SLIDE 23

Memoization

M: 3 3 _ _ _ _ _ _

M[4] = max(1 + 3, M-Compute-Opt(3))

SLIDE 24

Memoization

M: 3 3 4 _ _ _ _ _

M[3] = max(4 + 0, 3)

SLIDE 25

Memoization

M: 3 3 4 4 _ _ _ _

M[4] = max(1 + 3, 4)

SLIDE 26

Memoization

M: 3 3 4 4 6 _ _ _

M[5] = max(3 + 3, 4)

SLIDE 27

Memoization

M: 3 3 4 4 6 _ _ _

M[8] = max(1 + 6, M-Compute-Opt(7))

SLIDE 28

Memoization

M: 3 3 4 4 6 _ _ _

M[7] = max(3 + 6, M-Compute-Opt(6))

SLIDE 29

Memoization

M: 3 3 4 4 6 8 _ _

M[6] = max(4 + 4, 6)

SLIDE 30

Memoization

M: 3 3 4 4 6 8 9 _

M[7] = max(3 + 6, 8)

SLIDE 31

Memoization

M: 3 3 4 4 6 8 9 9

M[8] = max(1 + 6, 9)

SLIDE 32

Memoization

M: 3 3 4 4 6 8 9 9   (final; the optimal value is M[8] = 9)

SLIDE 33

Sort jobs by finish time so that f1 ≤ f2 ≤ ... ≤ fn.
Compute p(1), p(2), …, p(n).

for j = 1 to n
   M[j] = empty
M[0] = 0

M-Compute-Opt(j) {
   if (M[j] is empty)
      M[j] = max(vj + M-Compute-Opt(p(j)), M-Compute-Opt(j-1))
   return M[j]
}

Running Time?

SLIDE 34

Iterative Solution

Bottom-up dynamic programming. Solve subproblems in ascending order.

Sort jobs by finish time: f1 ≤ f2 ≤ ... ≤ fn.
Compute p(1), p(2), …, p(n).

Iterative-Compute-Opt {
   M[0] = 0
   for j = 1 to n
      M[j] = max(vj + M[p(j)], M[j-1])
}
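The bottom-up loop in Python, on the same reconstructed instance used earlier (an assumption consistent with the memoization trace):

```python
def iterative_compute_opt(v, p):
    """Bottom-up weighted interval scheduling.
    v, p are 1-indexed lists (index 0 unused); returns the full table M."""
    n = len(v) - 1
    M = [0] * (n + 1)
    for j in range(1, n + 1):
        M[j] = max(v[j] + M[p[j]],   # Case 1: take job j
                   M[j - 1])         # Case 2: skip job j
    return M

# Reconstructed instance (an assumption):
v = [0, 3, 2, 4, 1, 3, 4, 3, 1]
p = [0, 0, 0, 0, 2, 2, 4, 5, 5]
print(iterative_compute_opt(v, p))  # -> [0, 3, 3, 4, 4, 6, 8, 9, 9]
```

The loop does O(1) work per job, so the total running time is dominated by the O(n log n) sort.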

SLIDE 35

Compute the Solution (Not Just Its Value)

Exercise: suppose you know the value OPT(j) for all j. How can you produce the set of intervals in the optimal solution?
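One standard approach, sketched here rather than given as the deck's official answer: replay the binary choice backwards through the filled table. Job j belongs to some optimal solution exactly when Case 1 is at least as good as Case 2 at j. The instance is the reconstructed one used earlier (an assumption):

```python
def find_solution(j, v, p, M):
    """Recover an optimal job set from the filled table M by walking
    the binary choice backwards from j."""
    chosen = []
    while j > 0:
        if v[j] + M[p[j]] >= M[j - 1]:   # Case 1 wins: take job j
            chosen.append(j)
            j = p[j]
        else:                            # Case 2 wins: skip job j
            j -= 1
    return chosen[::-1]

v = [0, 3, 2, 4, 1, 3, 4, 3, 1]
p = [0, 0, 0, 0, 2, 2, 4, 5, 5]
M = [0, 3, 3, 4, 4, 6, 8, 9, 9]
print(find_solution(8, v, p, M))  # -> [1, 5, 7]
```

This traceback takes only O(n) extra time, since j strictly decreases on every step.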

SLIDE 36

Dynamic Programming “Recipe”

Recursive formulation of optimal solution in terms of subproblems Obvious implementation requires solving exponentially many subproblems Careful implementation to solve only polynomially many different subproblems