Dynamic Programming Textbook Reading Chapters 15, 24 & 25 - - PowerPoint PPT Presentation

SLIDE 1

Dynamic Programming

Textbook Reading Chapters 15, 24 & 25

SLIDE 2

Overview

Important tool

  • Recurrence relations

Design principle

  • Recursively break the problem into smaller subproblems.
  • Avoid repeatedly solving the same subproblems by caching their solutions.

Problems

  • Weighted interval scheduling
  • Sequence alignment
  • Optimal binary search trees
  • Shortest paths
SLIDE 3

Weighted Interval Scheduling

Given: A set of activities competing for time intervals on a certain resource (e.g., classes to be scheduled competing for a classroom).

Goal: Schedule non-conflicting activities so that the total time the resource is in use is maximized.

SLIDE 6
W. I. S.: A Naïve Solution

  • Try all possible subsets.
  • Check each subset for conflicts.
  • Out of the non-conflicting ones, remember the one with maximal total length.

Cost: O(2^n · n^2)
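The naïve algorithm can be sketched in Python as follows. The representation of intervals as (start, end) pairs and the use of an interval's length as its weight are assumptions matching the deck's setup; intervals are treated as half-open, so touching endpoints do not conflict.

```python
from itertools import combinations

def wis_brute_force(intervals):
    """Naive weighted interval scheduling: try all 2^n subsets,
    check each for conflicts, keep the best conflict-free one.
    Intervals are (start, end) pairs; an interval's weight is its
    length, end - start. Cost: O(2^n * n^2)."""
    best = 0
    for r in range(len(intervals) + 1):
        for subset in combinations(intervals, r):
            # O(n^2) pairwise conflict check per subset.
            conflict_free = all(a[1] <= b[0] or b[1] <= a[0]
                                for i, a in enumerate(subset)
                                for b in subset[i + 1:])
            if conflict_free:
                best = max(best, sum(e - s for s, e in subset))
    return best
```

For example, on the three intervals (1, 3), (2, 5), (4, 6), the best conflict-free choice is {(1, 3), (4, 6)} with total length 4.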

SLIDE 12
W. I. S.: Towards a Better Solution

General idea:

  • Try to make one choice at a time, just as in a greedy algorithm.
  • In each step, what are the options we can choose from?
  • What can we say about the subproblem we obtain after choosing each option?

What options do we have? An interval is in the optimal solution or it isn’t. Towards a recurrence for the cost of an optimal solution:

  • If the maximal-length subset of {I1, I2, . . . , In} does not include In, then it must be a maximal-length subset of {I1, I2, . . . , In–1}.
  • If the maximal-length subset of {I1, I2, . . . , In} includes In, then it must be O ∪ {In}, where O is a maximal-length subset of all intervals in {I1, I2, . . . , In} that do not overlap In.

SLIDE 15
W. I. S.: Cleaning Up the Model

Number the intervals I1, I2, . . . , In by increasing ending times. For 1 ≤ j ≤ n, let pj = max({0} ∪ {k | 1 ≤ k < j and Ik does not overlap Ij}). (The original slide illustrates this with nine intervals I1–I9 on a timeline and a table of the resulting pj values; the figure does not survive in this transcript.) If the maximal-length subset of {I1, I2, . . . , In} includes In, then it is O ∪ {In}, where O is a maximal-length subset of the intervals {I1, I2, . . . , Ipn}.

SLIDE 19
W. I. S.: A Recurrence for the Optimal Solution

Let |Ij| be the length of interval Ij. Let ℓ(j) be the maximal total length of any subset of non-overlapping intervals in {I1, I2, . . . , Ij}. What we’re interested in is ℓ(n)!

ℓ(j) =
  • 0 if j = 0
  • max(ℓ(j – 1), |Ij| + ℓ(pj)) if j > 0

SLIDE 25
W. I. S.: A Recursive Algorithm

FindScheduleLength(I, p, j)
  if j = 0
    then return 0
    else return max(FindScheduleLength(I, p, p[j]) + |I[j]|, FindScheduleLength(I, p, j – 1))

Running time: O(2^n)

(The original slide shows the recursion tree rooted at ℓ(5), in which subproblems such as ℓ(1), ℓ(2), and ℓ(3) each appear several times.)

The recursive algorithm computes many values repeatedly. There are only n values to compute!
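A direct Python transcription makes the repeated work visible. The 1-based encoding of the lengths and predecessor arrays and the optional call counter are illustrative assumptions, not part of the slides.

```python
def find_schedule_length(lengths, p, j, calls=None):
    """Plain recursive evaluation of the recurrence for l(j).
    lengths[j] is |I_j| and p[j] is the predecessor index p_j,
    both 1-based with index 0 unused. `calls` optionally counts
    how often each subproblem l(j) is evaluated."""
    if calls is not None:
        calls[j] = calls.get(j, 0) + 1
    if j == 0:
        return 0
    return max(find_schedule_length(lengths, p, p[j], calls) + lengths[j],
               find_schedule_length(lengths, p, j - 1, calls))
```

Already for three intervals with lengths 2, 3, 2 and predecessors p = (0, 0, 1), the subproblem ℓ(1) is evaluated twice.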

SLIDE 29
W. I. S.: Memoizing the Recursive Algorithm

Memoization: Store already computed values in a table to avoid recomputing them. Here, initialize a table ℓ where ℓ[j] is the length of the optimal schedule for {I1, I2, . . . , Ij}. Initially, ℓ[j] = –∞ for all j.

FindScheduleLength(I, ℓ, p, j)
  if j = 0
    then return 0
    else if ℓ[j] < 0
      then ℓ[j] = max(FindScheduleLength(I, ℓ, p, p[j]) + |I[j]|, FindScheduleLength(I, ℓ, p, j – 1))
  return ℓ[j]

Running time: O(n)
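In Python, the memoized variant might look like this; a dictionary stands in for the table initialized to –∞, with the same illustrative 1-based encoding as before.

```python
def find_schedule_length_memo(lengths, p, j, memo=None):
    """Memoized recursion: each l(j) is computed at most once,
    so the total work after preprocessing is O(n). lengths and p
    are 1-based with index 0 unused."""
    if memo is None:
        memo = {}
    if j == 0:
        return 0
    if j not in memo:  # plays the role of the l[j] < 0 test
        memo[j] = max(
            find_schedule_length_memo(lengths, p, p[j], memo) + lengths[j],
            find_schedule_length_memo(lengths, p, j - 1, memo))
    return memo[j]
```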

SLIDE 33
W. I. S.: Iterative Table Fill-In

FindScheduleLength(I, p)
  ℓ[0] = 0
  for j = 1 to n
    do ℓ[j] = max(ℓ[j – 1], ℓ[p[j]] + |I[j]|)
  return ℓ[n]

Running time: O(n)

Advantage over memoization:

  • No need for recursion.
  • Algorithm is often simpler.

Disadvantage over memoization:

  • Need to worry about the order in which the table entries are computed: all entries needed to compute the current entry need to be computed first.
  • Memoization computes table entries as needed.
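The iterative fill-in translates almost line by line; the 1-based arrays are simulated with a dummy entry at index 0 (an illustrative convention, not part of the slides).

```python
def find_schedule_length_iter(lengths, p):
    """Iterative table fill-in for l(j); lengths[0] and p[0] are
    unused dummies. l[j] depends only on l[j - 1] and l[p[j]],
    both already filled in when j is processed. O(n) time."""
    n = len(lengths) - 1
    l = [0] * (n + 1)
    for j in range(1, n + 1):
        l[j] = max(l[j - 1], l[p[j]] + lengths[j])
    return l[n]
```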
SLIDE 36
W. I. S.: Computing the Set of Intervals

FindSchedule(I, p)
  ℓ[0] = 0
  S[0] = [ ]
  for j = 1 to n
    do if ℓ[j – 1] > ℓ[p[j]] + |I[j]|
      then ℓ[j] = ℓ[j – 1]
           S[j] = S[j – 1]
      else ℓ[j] = ℓ[p[j]] + |I[j]|
           S[j] = [I[j]] ++ S[p[j]]
  return S[n]

This computes the sequence of intervals ordered from last to first. This list is of course easy to reverse in linear time. Running time: O(n)

SLIDE 39
W. I. S.: The Missing Details

What’s missing?

  • Sort the intervals by their ending times.
  • Compute the predecessor array p.

Solution:

  • Sorting is easily done in O(n lg n) time.
  • To compute p[j], perform binary search with I[j]’s starting time on the sorted array of ending times.

Theorem: The weighted interval scheduling problem can be solved in O(n lg n) time.
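Putting the pieces together, here is a sketch of the full O(n lg n) pipeline: sort, binary search for p, table fill-in, and traceback of the chosen set. The (start, end) pair representation with length as weight is again an assumption.

```python
import bisect

def weighted_interval_scheduling(intervals):
    """Sort by ending time, compute p[j] by binary search on the
    sorted ending times, fill the table, and trace back the chosen
    intervals. Returns (optimal total length, chosen intervals)."""
    iv = sorted(intervals, key=lambda t: t[1])
    ends = [e for _, e in iv]
    n = len(iv)
    # p[j] = number of intervals ending no later than I_j starts
    # (1-based; 0 means no compatible predecessor).
    p = [0] * (n + 1)
    for j in range(1, n + 1):
        p[j] = bisect.bisect_right(ends, iv[j - 1][0])
    l = [0] * (n + 1)
    for j in range(1, n + 1):
        l[j] = max(l[j - 1], l[p[j]] + iv[j - 1][1] - iv[j - 1][0])
    chosen, j = [], n
    while j > 0:                      # traceback, last to first
        if l[j] == l[j - 1]:
            j -= 1
        else:
            chosen.append(iv[j - 1])
            j = p[j]
    return l[n], chosen[::-1]
```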

SLIDE 42

The Dynamic Programming Technique

The technique:

  • Develop a recurrence expressing the optimal solution for a given problem instance in terms of optimal solutions for smaller problem instances.
  • Evaluate this recurrence recursively using memoization or using iterative table fill-in.

For this to work, the problem must exhibit the optimal substructure property: The optimal solution to a problem instance must be composed of optimal solutions to smaller problem instances.

A speed-up over the naïve recursive algorithm is achieved if the problem exhibits overlapping subproblems: The same subproblem occurs over and over again in the recursive evaluation of the recurrence.

SLIDE 43

Developing a Dynamic Programming Algorithm

Step 1: Think top-down:

  • Consider an optimal solution (without worrying about how to compute it).
  • Identify how the optimal solution of any problem instance decomposes into optimal solutions to smaller problem instances.
  • Write down a recurrence based on this analysis.

Step 2: Formulate the algorithm, which computes the solution bottom-up:

  • Since an optimal solution depends on optimal solutions to smaller problem instances, we need to compute those first.

SLIDE 47

Sequence Alignment

Given the search term “Dalhusy Computer Science”, Google suggests the correction “Dalhousie Computer Science”. Can Google read your mind? No! They use a clever algorithm to match your mistyped query against the phrases they have in their database. “Dalhousie” is the closest match to “Dalhusy” they find. What’s a good similarity criterion?

SLIDE 50

Sequence Alignment

Problem: Given two strings X = x′1x′2 · · · xm and Y = y1y2 · · · yn, extend them to two strings X′ = x′1x′2 · · · x′t and Y′ = y′1y′2 · · · y′t of the same length by inserting gaps so that the following dissimilarity measure is minimized:

D(X′, Y′) = Σ_{i=1}^{t} d(x′i, y′i)

d(x, y) =
  • δ if x = ␣ or y = ␣ (gap penalty)
  • µx,y otherwise (mismatch penalty)

Example: Dalh␣usy␣ aligned with Dalhousie gives D(X′, Y′) = 2δ + µy,i.

Another (more important?) application: DNA sequence alignment to measure the similarity between different DNA samples.
SLIDE 57

Sequence Alignment: Problem Analysis

Assume (x′1x′2 · · · x′t, y′1y′2 · · · y′t) is an optimal alignment for (x1x2 · · · xm, y1y2 · · · yn). What choices do we have for the final pair (x′t, y′t)?

  • x′t = xm and y′t = yn: Then (x′1x′2 · · · x′t–1, y′1y′2 · · · y′t–1) must be an optimal alignment for (x1x2 · · · xm–1, y1y2 · · · yn–1).

Why? Assume there’s a better alignment (x′′1x′′2 · · · x′′s, y′′1y′′2 · · · y′′s) with dissimilarity

Σ_{i=1}^{s} d(x′′i, y′′i) < Σ_{i=1}^{t–1} d(x′i, y′i).

Then (x′′1x′′2 · · · x′′s x′t, y′′1y′′2 · · · y′′s y′t) is an alignment for (x1x2 · · · xm, y1y2 · · · yn) with dissimilarity

Σ_{i=1}^{s} d(x′′i, y′′i) + d(x′t, y′t) < Σ_{i=1}^{t–1} d(x′i, y′i) + d(x′t, y′t) = Σ_{i=1}^{t} d(x′i, y′i),

a contradiction.

SLIDE 59

Sequence Alignment: Problem Analysis

Assume (x′1x′2 · · · x′t, y′1y′2 · · · y′t) is an optimal alignment for (x1x2 · · · xm, y1y2 · · · yn). What choices do we have for the final pair (x′t, y′t)?

  • x′t = xm and y′t = yn: (x′1x′2 · · · x′t–1, y′1y′2 · · · y′t–1) must be an optimal alignment for (x1x2 · · · xm–1, y1y2 · · · yn–1).
  • x′t = xm and y′t = ␣: (x′1x′2 · · · x′t–1, y′1y′2 · · · y′t–1) must be an optimal alignment for (x1x2 · · · xm–1, y1y2 · · · yn).
  • x′t = ␣ and y′t = yn: (x′1x′2 · · · x′t–1, y′1y′2 · · · y′t–1) must be an optimal alignment for (x1x2 · · · xm, y1y2 · · · yn–1).

SLIDE 62

Sequence Alignment: The Recurrence

Let D(i, j) be the dissimilarity of the strings x1x2 · · · xi and y1y2 · · · yj. We are interested in D(m, n).

Recurrence:

D(i, j) =
  • δ · j if i = 0
  • δ · i if j = 0
  • min(D(i – 1, j – 1) + µxi,yj, D(i, j – 1) + δ, D(i – 1, j) + δ) otherwise
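A Python sketch of this recurrence, computing only the dissimilarity D(m, n). Passing the mismatch penalty as a function mu is an assumption; with gap penalty 1 and mu(x, y) = 0 for matches and 1 otherwise, this is exactly edit distance.

```python
def alignment_cost(X, Y, delta, mu):
    """Fill the (m+1) x (n+1) table for D(i, j) bottom-up.
    delta is the gap penalty, mu(x, y) the mismatch penalty.
    O(mn) time and space."""
    m, n = len(X), len(Y)
    D = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        D[i][0] = delta * i          # x_1..x_i aligned against gaps
    for j in range(1, n + 1):
        D[0][j] = delta * j          # gaps aligned against y_1..y_j
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            D[i][j] = min(D[i - 1][j - 1] + mu(X[i - 1], Y[j - 1]),
                          D[i][j - 1] + delta,
                          D[i - 1][j] + delta)
    return D[m][n]
```

With unit penalties this matches the Dalhousie example: two gaps plus one mismatch give a total dissimilarity of 3.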

SLIDE 65

Sequence Alignment: The Algorithm

SequenceAlignment(X, Y, µ, δ)
  D[0, 0] = 0
  A[0, 0] = [ ]
  for i = 1 to m
    do D[i, 0] = D[i – 1, 0] + δ
       A[i, 0] = [(X[i], ␣)] ++ A[i – 1, 0]
  for j = 1 to n
    do D[0, j] = D[0, j – 1] + δ
       A[0, j] = [(␣, Y[j])] ++ A[0, j – 1]
  for i = 1 to m
    do for j = 1 to n
      do D[i, j] = D[i – 1, j – 1] + µ[X[i], Y[j]]
         A[i, j] = [(X[i], Y[j])] ++ A[i – 1, j – 1]
         if D[i, j] > D[i – 1, j] + δ
           then D[i, j] = D[i – 1, j] + δ
                A[i, j] = [(X[i], ␣)] ++ A[i – 1, j]
         if D[i, j] > D[i, j – 1] + δ
           then D[i, j] = D[i, j – 1] + δ
                A[i, j] = [(␣, Y[j])] ++ A[i, j – 1]
  return A[m, n]

Running time: O(mn). Again, the sequence alignment is reported back-to-front and can be reversed in O(m + n) time.

SLIDE 74

Optimal Binary Search Trees

Balanced binary search trees (red-black trees, AVL trees, . . . ) guarantee O(lg n) time to find an element. Can we do better? Not in the worst case.

Let x1 < x2 < · · · < xn be the elements to be stored in the tree. Let P = {p1, p2, . . . , pn} be the probabilities of searching for these elements. For a binary search tree T, let dT(xi) denote the depth of element xi in T. The cost of searching for element xi is in O(dT(xi)). The expected cost of a random query is in O(CP(T)), where

CP(T) = Σ_{i=1}^{n} pi dT(xi).

An optimal binary search tree is a binary search tree T that minimizes CP(T). (The original slide also shows an example search tree on thirteen elements x1, . . . , x13.)

SLIDE 80

Balancing Is Not Necessarily Optimal

Assume n = 2^k – 1 and pi = 2^–i for all 1 ≤ i ≤ n – 1 and pn = 2^(–n+1).

Balanced tree: x1 is at depth lg n, and p1 = 1/2. ⇒ Expected cost ≥ (lg n)/2.

Long path: Depth of xi is i. ⇒ Expected cost = Σ_{i=1}^{n–1} i/2^i + n/2^(n–1) < Σ_{i=1}^{∞} i/2^i + n/2^(n–1) = (1/2)/(1 – 1/2)^2 + n/2^(n–1) = 2 + n/2^(n–1) < 3.

slide-81
SLIDE 81

Optimal Binary Search Trees: Problem Analysis

The structure of a binary search tree: Assume we want to store elements xℓ, xℓ+1, . . . , xr.

slide-82
SLIDE 82

Optimal Binary Search Trees: Problem Analysis

The structure of a binary search tree: Assume we want to store elements xℓ, xℓ+1, . . . , xr. xm xm+1, xm+2, . . . , xr xℓ, xℓ+1, . . . , xm–1 Tℓ Tr

slide-83
SLIDE 83

Optimal Binary Search Trees: Problem Analysis

Let pi,j = j

h=i ph.

CP(T) = pℓ,r + CP(Tℓ) + CP(Tr) The structure of a binary search tree: Assume we want to store elements xℓ, xℓ+1, . . . , xr. xm xm+1, xm+2, . . . , xr xℓ, xℓ+1, . . . , xm–1 Tℓ Tr

slide-84
SLIDE 84

Optimal Binary Search Trees: Problem Analysis

Let pi,j = j

h=i ph.

CP(T) = pℓ,r + CP(Tℓ) + CP(Tr) ⇒ Tℓ and Tr are optimal search trees for xℓ, xℓ+1, . . . , xm–1 and xm+1, xm+2, . . . , xr, respectively. The structure of a binary search tree: Assume we want to store elements xℓ, xℓ+1, . . . , xr. xm xm+1, xm+2, . . . , xr xℓ, xℓ+1, . . . , xm–1 Tℓ Tr

slide-85
SLIDE 85

Optimal Binary Search Trees: Problem Analysis

The structure of a binary search tree: Assume we want to store elements xℓ, xℓ+1, . . . , xr. Some element xm is stored at the root; its left subtree Tℓ stores xℓ, xℓ+1, . . . , xm–1 and its right subtree Tr stores xm+1, xm+2, . . . , xr.

Let pi,j = Σ_{h=i}^{j} ph. Then

CP(T) = pℓ,r + CP(Tℓ) + CP(Tr)

⇒ Tℓ and Tr are optimal search trees for xℓ, xℓ+1, . . . , xm–1 and xm+1, xm+2, . . . , xr, respectively. We need to figure out which element to store at the root!

SLIDE 87

Optimal Binary Search Trees: The Recurrence

Let C(ℓ, r) be the cost of an optimal binary search tree for xℓ, xℓ+1, . . . , xr. We are interested in C(1, n).

C(ℓ, r) =
  • 0 if r < ℓ
  • pℓ,r + min_{ℓ≤m≤r}(C(ℓ, m – 1) + C(m + 1, r)) otherwise
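The recurrence evaluates directly in Python by increasing subproblem size, here returning only the optimal cost C(1, n). Passing the probabilities as a 1-based list with a dummy at index 0 is an illustrative assumption.

```python
def obst_cost(p):
    """Cost of an optimal BST for elements x_1 < ... < x_n with
    search probabilities p[1..n] (p[0] unused). Implements
    C(l, r) = p_{l,r} + min_m (C(l, m-1) + C(m+1, r)), with
    C(l, r) = 0 for r < l. O(n^3) time."""
    n = len(p) - 1
    prefix = [0.0] * (n + 1)         # prefix[i] = p_1 + ... + p_i
    for i in range(1, n + 1):
        prefix[i] = prefix[i - 1] + p[i]
    # C[l][r]; entries with r < l stay 0 (the base case).
    C = [[0.0] * (n + 2) for _ in range(n + 2)]
    for size in range(1, n + 1):     # subproblem size r - l + 1
        for l in range(1, n - size + 2):
            r = l + size - 1
            C[l][r] = prefix[r] - prefix[l - 1] + min(
                C[l][m - 1] + C[m + 1][r] for m in range(l, r + 1))
    return C[1][n]
```

For example, with p = (1/2, 1/4, 1/4) the recurrence gives C(1, 3) = 1.75.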

SLIDE 89

Optimal Binary Search Trees: The Algorithm

OptimalBinarySearchTree(X, P)
  for i = 1 to n
    do P′[i, i] = P[i]
       for j = i + 1 to n
         do P′[i, j] = P′[i, j – 1] + P[j]
  for i = 1 to n + 1
    do C[i, i – 1] = 0
       T[i, i – 1] = ∅
  for ℓ = 0 to n – 1
    do for i = 1 to n – ℓ
      do C[i, i + ℓ] = ∞
         for j = i to i + ℓ
           do if C[i, i + ℓ] > C[i, j – 1] + C[j + 1, i + ℓ]
             then C[i, i + ℓ] = C[i, j – 1] + C[j + 1, i + ℓ]
                  T[i, i + ℓ] = new node storing X[j]
                  T[i, i + ℓ].left = T[i, j – 1]
                  T[i, i + ℓ].right = T[j + 1, i + ℓ]
         C[i, i + ℓ] = C[i, i + ℓ] + P′[i, i + ℓ]
  return T[1, n]

Lemma: An optimal binary search tree for n elements can be computed in O(n^3) time.

SLIDE 91

Single-Source Shortest Paths

Dijkstra’s algorithm may fail in the presence of negative-weight edges. (The original slide shows a small example graph containing an edge of weight –3, side by side with the distances Dijkstra’s algorithm computes and the correct ones, which differ.) We need an algorithm that can deal with negative-length edges.

slide-92
SLIDE 92

Single-Source Shortest Paths: Problem Analysis

Lemma: If P = (u0, u1, . . . , uk) is a shortest path from u0 = s to uk = v, then P′ = (u0, u1, . . . , uk–1) is a shortest path from u0 to uk–1. [Figure: path P from s = u0 through uk–1 to v = uk.]

slide-93
SLIDE 93

Single-Source Shortest Paths: Problem Analysis

Lemma: If P = (u0, u1, . . . , uk) is a shortest path from u0 = s to uk = v, then P′ = (u0, u1, . . . , uk–1) is a shortest path from u0 to uk–1. [Figure: path P from s = u0 to v = uk; the prefix up to uk–1 is a shortest path from u0 to uk–1.]

slide-94
SLIDE 94

Single-Source Shortest Paths: Problem Analysis

Lemma: If P = (u0, u1, . . . , uk) is a shortest path from u0 = s to uk = v, then P′ = (u0, u1, . . . , uk–1) is a shortest path from u0 to uk–1. Observation: P′ has one edge fewer than P. [Figure: path P from s = u0 to v = uk; the prefix up to uk–1 is a shortest path from u0 to uk–1.]

slide-95
SLIDE 95

Single-Source Shortest Paths: The Recurrence

Let di(s, v) be the length of the shortest path Pi(s, v) from s to v that has at most i edges.

slide-96
SLIDE 96

Single-Source Shortest Paths: The Recurrence

Let di(s, v) be the length of the shortest path Pi(s, v) from s to v that has at most i edges. di(s, v) = ∞ if there is no path with at most i edges from s to v.

slide-97
SLIDE 97

Single-Source Shortest Paths: The Recurrence

Let di(s, v) be the length of the shortest path Pi(s, v) from s to v that has at most i edges. di(s, v) = ∞ if there is no path with at most i edges from s to v. d(s, v) = dn–1(s, v), since in the absence of negative cycles a shortest path uses at most n – 1 edges.

slide-98
SLIDE 98

Single-Source Shortest Paths: The Recurrence

Recurrence: If i = 0, then there exists a path from s to v with at most i edges only if v = s:

d0(s, v) =

  • 0 if v = s
  • ∞ otherwise

Let di(s, v) be the length of the shortest path Pi(s, v) from s to v that has at most i edges. di(s, v) = ∞ if there is no path with at most i edges from s to v. d(s, v) = dn–1(s, v)

slide-99
SLIDE 99

Single-Source Shortest Paths: The Recurrence

Recurrence: If i = 0, then there exists a path from s to v with at most i edges only if v = s:

d0(s, v) =

  • 0 if v = s
  • ∞ otherwise

If i > 0, then

Let di(s, v) be the length of the shortest path Pi(s, v) from s to v that has at most i edges. di(s, v) = ∞ if there is no path with at most i edges from s to v. d(s, v) = dn–1(s, v)

slide-100
SLIDE 100

Single-Source Shortest Paths: The Recurrence

Recurrence: If i = 0, then there exists a path from s to v with at most i edges only if v = s:

d0(s, v) =

  • 0 if v = s
  • ∞ otherwise

If i > 0, then

  • Pi(s, v) has at most i – 1 edges or

Let di(s, v) be the length of the shortest path Pi(s, v) from s to v that has at most i edges. di(s, v) = ∞ if there is no path with at most i edges from s to v. d(s, v) = dn–1(s, v)

slide-101
SLIDE 101

Single-Source Shortest Paths: The Recurrence

Recurrence: If i = 0, then there exists a path from s to v with at most i edges only if v = s:

d0(s, v) =

  • 0 if v = s
  • ∞ otherwise

If i > 0, then

  • Pi(s, v) has at most i – 1 edges or
  • Pi(s, v) has i edges.

Let di(s, v) be the length of the shortest path Pi(s, v) from s to v that has at most i edges. di(s, v) = ∞ if there is no path with at most i edges from s to v. d(s, v) = dn–1(s, v)

slide-102
SLIDE 102

Single-Source Shortest Paths: The Recurrence

Recurrence: If i = 0, then there exists a path from s to v with at most i edges only if v = s:

d0(s, v) =

  • 0 if v = s
  • ∞ otherwise

If i > 0, then

  • Pi(s, v) has at most i – 1 edges or
    ⇒ Pi(s, v) = Pi–1(s, v)
  • Pi(s, v) has i edges.

Let di(s, v) be the length of the shortest path Pi(s, v) from s to v that has at most i edges. di(s, v) = ∞ if there is no path with at most i edges from s to v. d(s, v) = dn–1(s, v)

slide-103
SLIDE 103

Single-Source Shortest Paths: The Recurrence

Recurrence: If i = 0, then there exists a path from s to v with at most i edges only if v = s:

d0(s, v) =

  • 0 if v = s
  • ∞ otherwise

If i > 0, then

  • Pi(s, v) has at most i – 1 edges or
    ⇒ Pi(s, v) = Pi–1(s, v)
  • Pi(s, v) has i edges.
    ⇒ Pi(s, v) = Pi–1(s, u) ◦ (u, v) for some in-neighbour u of v.

Let di(s, v) be the length of the shortest path Pi(s, v) from s to v that has at most i edges. di(s, v) = ∞ if there is no path with at most i edges from s to v. d(s, v) = dn–1(s, v)

slide-104
SLIDE 104

Single-Source Shortest Paths: The Recurrence

Recurrence: If i = 0, then there exists a path from s to v with at most i edges only if v = s:

d0(s, v) =

  • 0 if v = s
  • ∞ otherwise

If i > 0, then

di(s, v) = min(di–1(s, v), min{di–1(s, u) + w(u, v) | (u, v) ∈ E})

Let di(s, v) be the length of the shortest path Pi(s, v) from s to v that has at most i edges. di(s, v) = ∞ if there is no path with at most i edges from s to v. d(s, v) = dn–1(s, v)

slide-105
SLIDE 105

Single-Source Shortest Paths: The Bellman-Ford Algorithm

BellmanFord(G, s)
 1  for every vertex v ∈ G
 2      do d[v] = ∞
 3         P[v] = ∅
 4  d[s] = 0
 5  P[s] = [s]
 6  for i = 1 to n – 1
 7      do for every vertex v ∈ G
 8             do for every in-edge e of v
 9                    do if d[e.tail] + e.weight < d[v]
10                           then d[v] = d[e.tail] + e.weight
11                                P[v] = [v] ++ P[e.tail]
12  return (d, P)

slide-106
SLIDE 106

Single-Source Shortest Paths: The Bellman-Ford Algorithm

BellmanFord(G, s)
 1  for every vertex v ∈ G
 2      do d[v] = ∞
 3         P[v] = ∅
 4  d[s] = 0
 5  P[s] = [s]
 6  for i = 1 to n – 1
 7      do for every vertex v ∈ G
 8             do for every in-edge e of v
 9                    do if d[e.tail] + e.weight < d[v]
10                           then d[v] = d[e.tail] + e.weight
11                                P[v] = [v] ++ P[e.tail]
12  return (d, P)

Lemma: The single-source shortest paths problem can be solved in O(nm) time on any weighted graph, provided there are no negative cycles.

slide-107
SLIDE 107

All-Pairs Shortest Paths

Goal: Compute the distance d(u, v) (and the corresponding shortest path), for every pair of vertices u, v ∈ G.

slide-108
SLIDE 108

All-Pairs Shortest Paths

Goal: Compute the distance d(u, v) (and the corresponding shortest path), for every pair of vertices u, v ∈ G. First idea: Run single-source shortest paths from every vertex u ∈ G.

slide-109
SLIDE 109

All-Pairs Shortest Paths

Goal: Compute the distance d(u, v) (and the corresponding shortest path), for every pair of vertices u, v ∈ G. First idea: Run single-source shortest paths from every vertex u ∈ G. Complexity:

  • O(n²m) using Bellman-Ford
  • O(n² lg n + nm) for non-negative edge weights using Dijkstra
slide-110
SLIDE 110

All-Pairs Shortest Paths

Goal: Compute the distance d(u, v) (and the corresponding shortest path), for every pair of vertices u, v ∈ G. First idea: Run single-source shortest paths from every vertex u ∈ G. Complexity:

  • O(n²m) using Bellman-Ford
  • O(n² lg n + nm) for non-negative edge weights using Dijkstra

Improved algorithms:

  • Floyd-Warshall: O(n³)
slide-111
SLIDE 111

All-Pairs Shortest Paths

Goal: Compute the distance d(u, v) (and the corresponding shortest path), for every pair of vertices u, v ∈ G. First idea: Run single-source shortest paths from every vertex u ∈ G. Complexity:

  • O(n²m) using Bellman-Ford
  • O(n² lg n + nm) for non-negative edge weights using Dijkstra

Improved algorithms:

  • Floyd-Warshall: O(n³)
  • Johnson: O(n² lg n + nm) (really cool!)
      • Run Bellman-Ford from an arbitrary vertex s in O(nm) time.
      • Change edge weights so they are all non-negative but shortest paths don’t change!
      • Run Dijkstra n times.
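The three steps of Johnson's algorithm can be sketched in Python as follows. This is a sketch under my own conventions (edge list of `(u, v, w)` triples, vertices `0..n-1`); one common variant, used here, runs Bellman-Ford from an added super-source connected to every vertex by a weight-0 edge rather than from an arbitrary existing vertex, so that every vertex is reachable. The resulting distances h give the reweighting w′(u, v) = w(u, v) + h[u] – h[v] ≥ 0, which preserves shortest paths because every u-to-v path changes by the same amount h[u] – h[v].

```python
import heapq
import math

def johnson(n, edges):
    """Johnson's algorithm: Bellman-Ford once for potentials h,
    reweight all edges to be non-negative, then Dijkstra from
    every vertex.  Returns the n x n matrix of distances."""
    # 1. Bellman-Ford from an added vertex n with 0-weight edges to all.
    h = [0.0] * (n + 1)
    aug = edges + [(n, v, 0.0) for v in range(n)]
    for _ in range(n):               # (n+1) - 1 relaxation rounds
        for u, v, w in aug:
            if h[u] + w < h[v]:
                h[v] = h[u] + w
    # 2. Reweight: w'(u, v) = w(u, v) + h[u] - h[v] >= 0.
    adj = [[] for _ in range(n)]
    for u, v, w in edges:
        adj[u].append((v, w + h[u] - h[v]))
    # 3. Dijkstra from every vertex on the reweighted graph.
    dist = [[math.inf] * n for _ in range(n)]
    for s in range(n):
        dist[s][s] = 0.0
        pq = [(0.0, s)]
        while pq:
            du, u = heapq.heappop(pq)
            if du > dist[s][u]:
                continue             # stale queue entry
            for v, w in adj[u]:
                if du + w < dist[s][v]:
                    dist[s][v] = du + w
                    heapq.heappush(pq, (du + w, v))
        for v in range(n):           # translate back to original weights
            if dist[s][v] < math.inf:
                dist[s][v] += h[v] - h[s]
    return dist
```

On sparse graphs this beats Floyd-Warshall, matching the O(n² lg n + nm) bound above.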
slide-112
SLIDE 112

All-Pairs Shortest Paths: The Recurrence

Number the vertices 1, 2, . . . , n. Let di(u, v) be the length of the shortest path Pi(u, v) that visits only vertices in {1, 2, . . . , i} ∪ {u, v}.

slide-113
SLIDE 113

All-Pairs Shortest Paths: The Recurrence

Number the vertices 1, 2, . . . , n. Let di(u, v) be the length of the shortest path Pi(u, v) that visits only vertices in {1, 2, . . . , i} ∪ {u, v}. d(u, v) = dn(u, v)

slide-114
SLIDE 114

All-Pairs Shortest Paths: The Recurrence

Number the vertices 1, 2, . . . , n. Let di(u, v) be the length of the shortest path Pi(u, v) that visits only vertices in {1, 2, . . . , i} ∪ {u, v}. d(u, v) = dn(u, v)

If i = 0, P0(u, v) cannot visit any vertices other than u and v:

d0(u, v) =

  • w(u, v) if (u, v) ∈ E
  • ∞ otherwise
slide-115
SLIDE 115

All-Pairs Shortest Paths: The Recurrence

Number the vertices 1, 2, . . . , n. Let di(u, v) be the length of the shortest path Pi(u, v) that visits only vertices in {1, 2, . . . , i} ∪ {u, v}. d(u, v) = dn(u, v)

If i = 0, P0(u, v) cannot visit any vertices other than u and v:

d0(u, v) =

  • w(u, v) if (u, v) ∈ E
  • ∞ otherwise

If i > 0, then Pi(u, v) includes vertex i or it doesn’t.

slide-116
SLIDE 116

All-Pairs Shortest Paths: The Recurrence

Number the vertices 1, 2, . . . , n. Let di(u, v) be the length of the shortest path Pi(u, v) that visits only vertices in {1, 2, . . . , i} ∪ {u, v}. d(u, v) = dn(u, v)

If i = 0, P0(u, v) cannot visit any vertices other than u and v:

d0(u, v) =

  • w(u, v) if (u, v) ∈ E
  • ∞ otherwise

If i > 0, then Pi(u, v) includes vertex i or it doesn’t. If i ∉ Pi(u, v), then Pi(u, v) = Pi–1(u, v). [Figure: path Pi–1(u, v) from u to v through vertices in {1, 2, . . . , i – 1}, avoiding vertex i.]

slide-117
SLIDE 117

All-Pairs Shortest Paths: The Recurrence

Number the vertices 1, 2, . . . , n. Let di(u, v) be the length of the shortest path Pi(u, v) that visits only vertices in {1, 2, . . . , i} ∪ {u, v}. d(u, v) = dn(u, v)

If i = 0, P0(u, v) cannot visit any vertices other than u and v:

d0(u, v) =

  • w(u, v) if (u, v) ∈ E
  • ∞ otherwise

If i > 0, then Pi(u, v) includes vertex i or it doesn’t. If i ∉ Pi(u, v), then Pi(u, v) = Pi–1(u, v). If i ∈ Pi(u, v), then Pi(u, v) = Pi–1(u, i) ◦ Pi–1(i, v). [Figure: path from u to v through vertex i, composed of Pi–1(u, i) and Pi–1(i, v) within {1, 2, . . . , i – 1}.]

slide-118
SLIDE 118

All-Pairs Shortest Paths: The Recurrence

Number the vertices 1, 2, . . . , n. Let di(u, v) be the length of the shortest path Pi(u, v) that visits only vertices in {1, 2, . . . , i} ∪ {u, v}. d(u, v) = dn(u, v)

If i = 0, P0(u, v) cannot visit any vertices other than u and v:

d0(u, v) =

  • w(u, v) if (u, v) ∈ E
  • ∞ otherwise

If i > 0, then Pi(u, v) includes vertex i or it doesn’t. If i ∉ Pi(u, v), then Pi(u, v) = Pi–1(u, v). If i ∈ Pi(u, v), then Pi(u, v) = Pi–1(u, i) ◦ Pi–1(i, v).

di(u, v) = min(di–1(u, v), di–1(u, i) + di–1(i, v))

slide-119
SLIDE 119

All-Pairs Shortest Paths: The Floyd-Warshall Algorithm

FloydWarshall(G)
 1  for every pair of vertices u, v ∈ G
 2      do d[u, v] = ∞
 3         p[u, v] = Nothing
 4  for every vertex v ∈ G
 5      do d[v, v] = 0
 6         p[v, v] = v
 7  for every edge e ∈ G
 8      do d[e.tail, e.head] = e.weight
 9         p[e.tail, e.head] = e.tail
10  for i = 1 to n
11      do for every pair of vertices u, v ∈ G such that i ∉ {u, v}
12             do if d[u, v] > d[u, i] + d[i, v]
13                    then d[u, v] = d[u, i] + d[i, v]
14                         p[u, v] = p[i, v]
15  return (d, p)

slide-120
SLIDE 120

All-Pairs Shortest Paths: The Floyd-Warshall Algorithm

FloydWarshall(G)
 1  for every pair of vertices u, v ∈ G
 2      do d[u, v] = ∞
 3         p[u, v] = Nothing
 4  for every vertex v ∈ G
 5      do d[v, v] = 0
 6         p[v, v] = v
 7  for every edge e ∈ G
 8      do d[e.tail, e.head] = e.weight
 9         p[e.tail, e.head] = e.tail
10  for i = 1 to n
11      do for every pair of vertices u, v ∈ G such that i ∉ {u, v}
12             do if d[u, v] > d[u, i] + d[i, v]
13                    then d[u, v] = d[u, i] + d[i, v]
14                         p[u, v] = p[i, v]
15  return (d, p)

ReportPath(p, u, v)
 1  if p[u, v] = Nothing
 2      then return Nothing
 3  P = [v]
 4  while v ≠ u
 5      do v = p[u, v]
 6         P.prepend(v)
 7  return P

slide-121
SLIDE 121

All-Pairs Shortest Paths: The Floyd-Warshall Algorithm

FloydWarshall(G)
 1  for every pair of vertices u, v ∈ G
 2      do d[u, v] = ∞
 3         p[u, v] = Nothing
 4  for every vertex v ∈ G
 5      do d[v, v] = 0
 6         p[v, v] = v
 7  for every edge e ∈ G
 8      do d[e.tail, e.head] = e.weight
 9         p[e.tail, e.head] = e.tail
10  for i = 1 to n
11      do for every pair of vertices u, v ∈ G such that i ∉ {u, v}
12             do if d[u, v] > d[u, i] + d[i, v]
13                    then d[u, v] = d[u, i] + d[i, v]
14                         p[u, v] = p[i, v]
15  return (d, p)

Lemma: The all-pairs shortest paths problem can be solved in O(n³) time, provided there are no negative cycles.

ReportPath(p, u, v)
 1  if p[u, v] = Nothing
 2      then return Nothing
 3  P = [v]
 4  while v ≠ u
 5      do v = p[u, v]
 6         P.prepend(v)
 7  return P
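The FloydWarshall and ReportPath pseudocode translates almost line for line into Python. The sketch below assumes my own graph representation (an edge list of `(u, v, w)` triples over vertices `0..n-1`); `p[u][v]` is the vertex before `v` on the shortest `u`-to-`v` path, exactly as in the slides.

```python
import math

def floyd_warshall(n, edges):
    """Evaluates d_i(u,v) = min(d_{i-1}(u,v), d_{i-1}(u,i) + d_{i-1}(i,v))
    in place, maintaining a predecessor table p for path reconstruction."""
    d = [[math.inf] * n for _ in range(n)]
    p = [[None] * n for _ in range(n)]
    for v in range(n):
        d[v][v] = 0
        p[v][v] = v
    for u, v, w in edges:
        d[u][v] = w
        p[u][v] = u
    for i in range(n):               # allow vertex i as an intermediate
        for u in range(n):
            for v in range(n):
                if d[u][i] + d[i][v] < d[u][v]:
                    d[u][v] = d[u][i] + d[i][v]
                    p[u][v] = p[i][v]
    return d, p

def report_path(p, u, v):
    """Walk predecessors back from v to u (the slide's ReportPath)."""
    if p[u][v] is None:
        return None
    path = [v]
    while v != u:
        v = p[u][v]
        path.append(v)
    return path[::-1]
```

The three nested loops make the O(n³) bound of the lemma explicit.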

slide-122
SLIDE 122

Summary

Both greedy algorithms and dynamic programming are applicable when the problem has optimal substructure: The optimal solution for a given input instance contains within it optimal solutions to smaller input instances.

slide-123
SLIDE 123

Summary

Both greedy algorithms and dynamic programming are applicable when the problem has optimal substructure: The optimal solution for a given input instance contains within it optimal solutions to smaller input instances. Greedy algorithms are applicable when an optimal solution can be obtained by making a locally optimal choice and then solving the resulting subproblem.

slide-124
SLIDE 124

Summary

Both greedy algorithms and dynamic programming are applicable when the problem has optimal substructure: The optimal solution for a given input instance contains within it optimal solutions to smaller input instances. Greedy algorithms are applicable when an optimal solution can be obtained by making a locally optimal choice and then solving the resulting subproblem. Dynamic programming exhaustively explores all possible choices and chooses the one that gives the best solution.

slide-125
SLIDE 125

Summary

Both greedy algorithms and dynamic programming are applicable when the problem has optimal substructure: The optimal solution for a given input instance contains within it optimal solutions to smaller input instances. Greedy algorithms are applicable when an optimal solution can be obtained by making a locally optimal choice and then solving the resulting subproblem. Dynamic programming exhaustively explores all possible choices and chooses the one that gives the best solution. Dynamic programming yields a faster solution than the naïve recursive algorithm when there are lots of overlapping subproblems.

slide-126
SLIDE 126

Summary

The design of a dynamic programming algorithm proceeds in two phases:

  1. Analyze the structure of an optimal solution to develop a recurrence for the cost of an optimal solution.
  2. Develop an algorithm that uses the recurrence to compute an optimal solution
      • Recursively using memoization or
      • Iteratively by populating a table with the costs of the solutions to all possible subproblems.

Both types of algorithms compute optimal solutions bottom-up.
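The two options in phase 2 can be contrasted on any recurrence. As a minimal stand-in (the Fibonacci recurrence F(n) = F(n – 1) + F(n – 2) is my example, not from the slides), both styles compute the same values, and both end up filling in the costs of all subproblems:

```python
from functools import lru_cache

@lru_cache(maxsize=None)          # option 1: recursion with memoization
def fib_memo(n):
    return n if n < 2 else fib_memo(n - 1) + fib_memo(n - 2)

def fib_table(n):                 # option 2: iteratively populate a table
    table = [0, 1] + [0] * max(0, n - 1)
    for i in range(2, n + 1):
        table[i] = table[i - 1] + table[i - 2]
    return table[n]
```

The memoized version touches only the subproblems the recursion actually reaches; the table version visits all of them in a fixed order.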