CSC373 Week 4: Dynamic Programming (contd), Network Flow (start) - Nisarg Shah (373F19)


SLIDE 1

CSC373 Week 4: Dynamic Programming (contd), Network Flow (start)

373F19 - Nisarg Shah

SLIDE 2

Recap

• Dynamic Programming Basics
➢ Optimal substructure property
➢ Bellman equation
➢ Top-down (memoization) vs bottom-up implementations
• Dynamic Programming Examples
➢ Weighted interval scheduling
➢ Knapsack problem
➢ Single-source shortest paths
➢ Chain matrix product

SLIDE 3

This Lecture

• Some more DP
➢ Edit distance (aka sequence alignment)
➢ Traveling salesman problem (TSP)
• Start of network flow
➢ Problem statement
➢ Ford-Fulkerson algorithm
➢ Running time
➢ Correctness

SLIDE 4

Edit Distance

• Edit distance (aka sequence alignment) problem
➢ How similar are strings X = x_1, …, x_m and Y = y_1, …, y_n?
• Suppose we can delete or replace symbols
➢ We can do these operations on any symbol in either string
➢ How many deletions & replacements does it take to match the two strings?

SLIDE 5

Edit Distance

• Example: ocurrance vs occurrence
➢ One alignment uses 6 replacements and 1 deletion
➢ A better alignment uses 1 replacement and 1 deletion

SLIDE 6

Edit Distance

• Edit distance problem
➢ Input
  • Strings X = x_1, …, x_m and Y = y_1, …, y_n
  • Cost d(a) of deleting symbol a
  • Cost r(a, b) of replacing symbol a with b
  • Assume r is symmetric, so r(a, b) = r(b, a)
➢ Goal
  • Compute the minimum total cost for matching the two strings
• Optimal substructure?
➢ Want to delete/replace at one end and recurse

SLIDE 7

Edit Distance

• Optimal substructure
➢ Goal: match x_1, …, x_m and y_1, …, y_n
➢ Consider the last symbols x_m and y_n
➢ Three options:
  • Delete x_m, and optimally match x_1, …, x_{m−1} and y_1, …, y_n
  • Delete y_n, and optimally match x_1, …, x_m and y_1, …, y_{n−1}
  • Match x_m and y_n, and optimally match x_1, …, x_{m−1} and y_1, …, y_{n−1}
➢ Hence, in the DP, we need to compute the optimal solutions for matching x_1, …, x_i with y_1, …, y_j for all (i, j)

SLIDE 8

Edit Distance

• E[i, j] = edit distance between x_1, …, x_i and y_1, …, y_j
• Bellman equation

E[i, j] =
  0                            if i = j = 0
  d(y_j) + E[i, j − 1]         if i = 0 ∧ j > 0
  d(x_i) + E[i − 1, j]         if i > 0 ∧ j = 0
  min{A, B, C}                 otherwise

where
  A = d(x_i) + E[i − 1, j]
  B = d(y_j) + E[i, j − 1]
  C = r(x_i, y_j) + E[i − 1, j − 1]

• O(m ⋅ n) time, O(m ⋅ n) space
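The recurrence can be sketched bottom-up in a few lines. The cost functions d (deletion) and r (replacement) below are illustrative placeholders, not values fixed by the slides: unit deletion cost, and replacement cost 0 for equal symbols and 1 otherwise.

```python
def edit_distance(x, y, d=lambda a: 1, r=lambda a, b: 0 if a == b else 1):
    m, n = len(x), len(y)
    E = [[0] * (n + 1) for _ in range(m + 1)]
    for j in range(1, n + 1):            # empty prefix of x: delete y_1..y_j
        E[0][j] = d(y[j - 1]) + E[0][j - 1]
    for i in range(1, m + 1):            # empty prefix of y: delete x_1..x_i
        E[i][0] = d(x[i - 1]) + E[i - 1][0]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            E[i][j] = min(
                d(x[i - 1]) + E[i - 1][j],                 # delete x_i
                d(y[j - 1]) + E[i][j - 1],                 # delete y_j
                r(x[i - 1], y[j - 1]) + E[i - 1][j - 1],   # match/replace
            )
    return E[m][n]
```

With unit costs, the slide's example gives `edit_distance("ocurrance", "occurrence") == 2` (one replacement, one deletion).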

SLIDE 9

Edit Distance

• Bellman equation (repeated from the previous slide)

E[i, j] =
  0                            if i = j = 0
  d(y_j) + E[i, j − 1]         if i = 0 ∧ j > 0
  d(x_i) + E[i − 1, j]         if i > 0 ∧ j = 0
  min{A, B, C}                 otherwise

where A = d(x_i) + E[i − 1, j], B = d(y_j) + E[i, j − 1], C = r(x_i, y_j) + E[i − 1, j − 1]

• Space complexity can be improved to O(m + n)
➢ To compute E[⋅, j], we only need E[⋅, j − 1] stored
➢ So we can forget E[⋅, j] as soon as we reach j + 2
➢ But this is not enough if we want to compute the actual solution (sequence of operations)
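The space-saving idea above can be sketched by keeping only the previous column of the table. Unit costs are assumed here as a simplification.

```python
def edit_distance_value(x, y):
    m = len(x)
    prev = list(range(m + 1))              # column j = 0: E[i, 0] = i deletions
    for j in range(1, len(y) + 1):
        cur = [j]                          # E[0, j] = j deletions
        for i in range(1, m + 1):
            cur.append(min(
                1 + cur[i - 1],                        # delete x_i
                1 + prev[i],                           # delete y_j
                (x[i - 1] != y[j - 1]) + prev[i - 1],  # match/replace
            ))
        prev = cur                         # forget column j - 1
    return prev[m]
```

This returns only the distance value; recovering the actual sequence of operations in O(m + n) space is exactly what Hirschberg's algorithm (next slides) adds.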

SLIDE 10

Hirschberg’s Algorithm

• The optimal solution can be computed in O(m ⋅ n) time and O(m + n) space too!

This slide is not in the scope of the course

SLIDE 11

Hirschberg’s Algorithm

• Key idea nicely combines divide & conquer with DP
• Edit distance graph (figure not captured; its edges have costs d(x_i) and d(y_j))

This slide is not in the scope of the course

SLIDE 12

Hirschberg’s Algorithm

• Observation (can be proved by induction)
➢ E[i, j] = length of the shortest path from (0,0) to (i, j) in the edit distance graph

This slide is not in the scope of the course

SLIDE 13

Hirschberg’s Algorithm

• Lemma
➢ The shortest path from (0,0) to (m, n) passes through (q, n/2), where q minimizes [length of the shortest path from (0,0) to (q, n/2)] + [length of the shortest path from (q, n/2) to (m, n)]

This slide is not in the scope of the course

SLIDE 14

Hirschberg’s Algorithm

• Idea
➢ Find q using divide-and-conquer
➢ Find shortest paths from (0,0) to (q, n/2) and from (q, n/2) to (m, n) using DP

This slide is not in the scope of the course
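The split-point computation at the heart of the lemma can be sketched as follows, assuming unit costs for simplicity; the names `dp_last_column` and `split_point` are mine, not from the slides. The forward pass gives shortest-path lengths from (0,0) to (i, n/2), the backward pass (on reversed strings) gives lengths from (i, n/2) to (m, n), and their minimum sum recovers q and the full edit distance.

```python
def dp_last_column(x, y):
    # Shortest-path lengths from (0,0) to (i, len(y)) for all i,
    # i.e. the last column of the edit-distance DP, in O(len(x)) space.
    m = len(x)
    prev = list(range(m + 1))
    for j in range(1, len(y) + 1):
        cur = [j]
        for i in range(1, m + 1):
            cur.append(min(1 + cur[i - 1], 1 + prev[i],
                           (x[i - 1] != y[j - 1]) + prev[i - 1]))
        prev = cur
    return prev

def split_point(x, y):
    half = len(y) // 2
    fwd = dp_last_column(x, y[:half])                # (0,0) to (i, n/2)
    bwd = dp_last_column(x[::-1], y[half:][::-1])    # (m-k, n/2) to (m, n)
    costs = [f + b for f, b in zip(fwd, reversed(bwd))]
    q = min(range(len(costs)), key=costs.__getitem__)
    return q, costs[q]    # by the lemma, costs[q] is the full edit distance
```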

SLIDE 15

Application: Protein Matching

SLIDE 16

Traveling Salesman

• Input
➢ Directed graph G = (V, E)
➢ Distance d(i, j) is the distance from node i to node j
• Output
➢ Minimum distance which needs to be traveled to start from some node v, visit every other node exactly once, and come back to v
• That is, the minimum cost of a Hamiltonian cycle
SLIDE 17

Traveling Salesman

• Approach
➢ Let’s start at node v_1 = 1
  • It’s a cycle, so the starting point does not matter
➢ Want to visit the other nodes in some order, say v_2, …, v_n
➢ Total distance is d(1, v_2) + d(v_2, v_3) + ⋯ + d(v_{n−1}, v_n) + d(v_n, 1)
  • Want to minimize this distance
• Naïve solution
➢ Check all possible orderings
➢ (n − 1)! = Θ((n/e)^n / √n) (Stirling’s approximation)

SLIDE 18

Traveling Salesman

• DP Approach
➢ Consider v_n (the last node before returning to v_1 = 1)
  • If v_n = c, we now want to find the optimal order of visiting the nodes in {2, …, n} ∖ {c}
  • So we will need to keep track of which subset of nodes we need to visit and where we need to end
➢ OPT[S, c] = minimum total distance of starting at 1, visiting each node in S exactly once, and ending at c ∈ S (without counting the distance for returning from c to 1)
  • Then the answer to our original problem can easily be computed as min_{c ∈ S} OPT[S, c] + d(c, 1), where S = {2, …, n}

SLIDE 19

Traveling Salesman

• DP Approach
➢ To compute OPT[S, c], we condition on the vertex which is visited right before c
• Bellman equation

OPT[S, c] = min_{m ∈ S∖{c}} OPT[S ∖ {c}, m] + d(m, c)

with base case OPT[{c}, c] = d(1, c)

Final solution = min_{c ∈ {2,…,n}} OPT[{2, …, n}, c] + d(c, 1)

• Time: O(n ⋅ 2^n) calls, O(n) time per call ⇒ O(n² ⋅ 2^n)
➢ Much better than the naïve solution, which takes roughly (n/e)^n time
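The recurrence above can be sketched as a bitmask DP (this is the Held-Karp algorithm). The distance-matrix representation `d[i][j]`, indexed from 1, is an assumption of this sketch; bit i−2 of a mask represents node i.

```python
from itertools import combinations

def tsp(d):
    n = len(d) - 1                         # nodes are 1..n; node 1 is the start
    others = list(range(2, n + 1))
    OPT = {}                               # OPT[(mask, c)] as in the recurrence
    for c in others:
        OPT[(1 << (c - 2), c)] = d[1][c]   # base case: S = {c}
    for size in range(2, n):
        for S in combinations(others, size):
            mask = 0
            for v in S:
                mask |= 1 << (v - 2)
            for c in S:                    # condition on the predecessor m of c
                rest = mask & ~(1 << (c - 2))
                OPT[(mask, c)] = min(OPT[(rest, m)] + d[m][c]
                                     for m in S if m != c)
    full = (1 << (n - 1)) - 1              # S = {2, ..., n}
    return min(OPT[(full, c)] + d[c][1] for c in others)
```

On a 4-node cycle with unit edges (and distance 2 across the diagonals), the optimal tour has cost 4.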

SLIDE 20

Traveling Salesman

• Bellman equation

OPT[S, c] = min_{m ∈ S∖{c}} OPT[S ∖ {c}, m] + d(m, c)

Final solution = min_{c ∈ {2,…,n}} OPT[{2, …, n}, c] + d(c, 1)

• Space complexity: O(n ⋅ 2^n)
➢ But computing the optimal solutions with |S| = k only requires storing the optimal solutions with |S| = k − 1
• Question: Using this observation, how much can we reduce the space complexity?

SLIDE 21

DP Concluding Remarks

• Key steps in designing a DP algorithm
➢ “Generalize” the problem first
  • E.g., instead of computing the edit distance between strings X = x_1, …, x_m and Y = y_1, …, y_n, we compute E[i, j] = edit distance between the i-prefix of X and the j-prefix of Y for all (i, j)
  • The right generalization is often obtained by looking at the structure of the “subproblem” which must be solved optimally to get an optimal solution to the overall problem
➢ Remember the difference between DP and divide-and-conquer
➢ Sometimes you can save quite a bit of space by only storing solutions to those subproblems that you need in the future

SLIDE 22

Network Flow

SLIDE 23

Network Flow

• Input
➢ A directed graph G = (V, E)
➢ Edge capacities c : E → ℝ≥0
➢ Source node s, target node t
• Output
➢ Maximum “flow” from s to t

SLIDE 24

Network Flow

• Assumptions (for simplicity)
➢ No edge enters s
➢ No edge comes out of t
➢ Edge capacity c(e) is a non-negative integer
  • Later, we’ll see what happens when c(e) can be a rational number

SLIDE 25

Network Flow

• Flow
➢ An s-t flow is a function f : E → ℝ≥0
➢ Intuitively, f(e) is the “amount of material” carried on edge e

SLIDE 26

Network Flow

• Constraints on flow f
• 1. Respecting capacities
  ∀e ∈ E : 0 ≤ f(e) ≤ c(e)
• 2. Flow conservation
  ∀v ∈ V ∖ {s, t} : Σ_{e into v} f(e) = Σ_{e leaving v} f(e)

Flow in = flow out at every node other than s and t; flow out at s = flow in at t
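The two constraints can be checked mechanically. The edge-dict representation of `cap` and `flow` below (mapping each directed edge (u, v) to its capacity / flow value) is an assumption of this sketch, not fixed by the slides.

```python
def is_valid_flow(cap, flow, s, t):
    # 1. capacity constraints: 0 <= f(e) <= c(e) on every edge
    if any(not (0 <= flow[e] <= cap[e]) for e in cap):
        return False
    # 2. conservation: flow in == flow out at every node except s and t
    nodes = {u for e in cap for u in e}
    for v in nodes - {s, t}:
        fin = sum(flow[(a, b)] for (a, b) in cap if b == v)
        fout = sum(flow[(a, b)] for (a, b) in cap if a == v)
        if fin != fout:
            return False
    return True
```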

SLIDE 27

Network Flow

• f_in(v) = Σ_{e into v} f(e)
• f_out(v) = Σ_{e leaving v} f(e)
• Value of flow f is v(f) = f_out(s) = f_in(t)
• Restating the problem:
➢ Given a directed graph G = (V, E) with edge capacities c : E → ℝ≥0, find a flow f* with the maximum value.

SLIDE 28

First Attempt

• A natural greedy approach
• 1. Start from zero flow (f(e) = 0 for each e).
• 2. While there exists an s-t path P in G such that f(e) < c(e) for each e ∈ P:
  a. Find one such path P
  b. Increase the flow on each edge e ∈ P by min_{e ∈ P} (c(e) − f(e))
• Let’s run it on an example!
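A small sketch of how this greedy can get stuck, using the example network from the following slides (edge set assumed here: s→u capacity 20, u→v capacity 30, v→t capacity 20, s→v capacity 10, u→t capacity 10). After pushing 20 units along s→u→v→t, no s-t path of unsaturated forward edges remains, although the maximum flow value is 30.

```python
def forward_path(cap, flow, u, t, visited):
    # DFS over edges with remaining forward capacity only (no reversing)
    if u == t:
        return [u]
    visited.add(u)
    for (a, b), c in cap.items():
        if a == u and b not in visited and flow[(a, b)] < c:
            rest = forward_path(cap, flow, b, t, visited)
            if rest:
                return [u] + rest
    return None

cap = {('s', 'u'): 20, ('u', 'v'): 30, ('v', 't'): 20, ('s', 'v'): 10, ('u', 't'): 10}
flow = {e: 0 for e in cap}
for e in [('s', 'u'), ('u', 'v'), ('v', 't')]:   # greedy's first chosen path
    flow[e] = 20
print(forward_path(cap, flow, 's', 't', set()))  # → None: stuck at value 20
```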
SLIDES 29-35

First Attempt

(Step-by-step run of the greedy approach on an example network; the figures are not captured in this preview.)

SLIDE 36

First Attempt

• Q: Why does the simple greedy approach fail?
• A: Because once it increases the flow on an edge, it is not allowed to decrease it.
• Need a way to “reverse” bad decisions

SLIDE 37

Reversing Bad Decisions

(Flow diagrams over nodes s, u, v, t; figures not captured.)

Suppose we start by sending 20 units of flow along the path s → u → v → t.

But the optimal configuration requires 10 fewer units of flow on u → v.

SLIDE 38

Reversing Bad Decisions

We can essentially send a “reverse” flow of 10 units along v → u.

So now we get the optimal flow, of value 30. (Flow diagrams not captured.)

SLIDE 39

Residual Graph

• Define the residual graph G_f of flow f
➢ G_f has the same vertices as G
➢ For each edge e = (u, v) in G, G_f has at most two edges
  • Forward edge e = (u, v) with capacity c(e) − f(e)
    We can send this much additional flow on e
  • Reverse edge e_rev = (v, u) with capacity f(e)
    The maximum “reverse” flow we can send is the maximum amount by which we can reduce the flow on e, which is f(e)
  • We only add an edge if its capacity is > 0
SLIDE 40

Residual Graph

• Example!

Flow f: 20/20, 20/30, 20/20, 0/10, 0/10 on the edges of the example network.
Residual graph G_f: the corresponding forward and reverse edges (figure not captured).
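The residual graph of this example can be reproduced with a small sketch. The edge-dict representation, and the assumption that the network has no antiparallel edge pairs, are mine.

```python
def residual(cap, flow):
    # assumes no antiparallel edge pairs in `cap`
    res = {}
    for (u, v), c in cap.items():
        if c - flow[(u, v)] > 0:        # forward edge: remaining capacity
            res[(u, v)] = c - flow[(u, v)]
        if flow[(u, v)] > 0:            # reverse edge: can undo up to f(e)
            res[(v, u)] = flow[(u, v)]
    return res

cap = {('s', 'u'): 20, ('u', 'v'): 30, ('v', 't'): 20, ('s', 'v'): 10, ('u', 't'): 10}
flow = {('s', 'u'): 20, ('u', 'v'): 20, ('v', 't'): 20, ('s', 'v'): 0, ('u', 't'): 0}
print(residual(cap, flow))
```

For this flow, the saturated edges s→u and v→t appear only as reverse edges, while u→v appears both forward (capacity 10) and reverse (capacity 20).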

SLIDE 41

Augmenting Paths

• Let P be an s-t path in the residual graph G_f
• Let bottleneck(P, f) be the smallest capacity across all edges in P
• “Augment” flow f by “sending” bottleneck(P, f) units of flow along P
➢ What does it mean to send x units of flow along P?
➢ For each forward edge e ∈ P, increase the flow on e by x
➢ For each reverse edge e_rev ∈ P, decrease the flow on e by x

SLIDE 42

Augmenting Paths

• Let’s argue that the new flow is a valid flow
• Capacity constraints:
➢ If we increase the flow on e, we can do so by at most the capacity of the forward edge e in G_f, which is c(e) − f(e)
  • So the new flow can be at most f(e) + (c(e) − f(e)) = c(e)
➢ If we decrease the flow on e, we can do so by at most the capacity of the reverse edge e_rev in G_f, which is f(e)
  • So the new flow is at least f(e) − f(e) = 0
SLIDE 43

Augmenting Paths

• Let’s argue that the new flow is a valid flow
• Flow conservation:
➢ Each node on the path (except s and t) has exactly two incident edges
  • Both forward / both reverse ⇒ one is incoming, one is outgoing
  • One forward, one reverse ⇒ both incoming / both outgoing
  • Net flow change remains 0

(Figure: a path from s to t with edge flow changes ±x; not captured.)

SLIDE 44

Ford-Fulkerson Algorithm

MaxFlow(G):
    Set f(e) = 0 for all e in G                         // initialize
    While P = FindPath(s, t, Residual(G, f)) != None:   // while there is an s-t path in G_f
        f = Augment(f, P)
        UpdateResidual(G, f)
    EndWhile
    Return f
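A self-contained sketch of the pseudocode above, under the assumptions of integer capacities, an edge-dict representation, and no antiparallel edge pairs, with augmenting paths found by DFS:

```python
def max_flow(cap, s, t):
    flow = {e: 0 for e in cap}

    def residual_adj():
        # residual graph as adjacency dict: u -> {v: (kind, residual capacity)}
        res = {}
        for (u, v), c in cap.items():
            if c - flow[(u, v)] > 0:
                res.setdefault(u, {})[v] = ('fwd', c - flow[(u, v)])
            if flow[(u, v)] > 0:
                res.setdefault(v, {})[u] = ('rev', flow[(u, v)])
        return res

    def find_path():
        # iterative DFS for an s-t path in the residual graph
        res = residual_adj()
        stack, seen = [(s, [])], {s}
        while stack:
            u, path = stack.pop()
            if u == t:
                return path
            for v, (kind, rc) in res.get(u, {}).items():
                if v not in seen:
                    seen.add(v)
                    stack.append((v, path + [(u, v, kind, rc)]))
        return None

    path = find_path()
    while path is not None:
        b = min(rc for (_, _, _, rc) in path)   # bottleneck(P, f)
        for (u, v, kind, _) in path:
            if kind == 'fwd':
                flow[(u, v)] += b               # forward edge: increase by b
            else:
                flow[(v, u)] -= b               # reverse edge: decrease by b
        path = find_path()
    return sum(flow[e] for e in cap if e[0] == s), flow
```

On the running example network, this returns the maximum flow value 30.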

SLIDE 45

Ford-Fulkerson Algorithm

• Running time:
➢ #Augmentations:
  • At every step, flow and capacities remain integers
  • For path P in G_f, bottleneck(P, f) > 0 implies bottleneck(P, f) ≥ 1
  • Each augmentation increases the flow by at least 1
  • At most C = Σ_{e leaving s} c(e) augmentations
➢ Time for an augmentation:
  • G_f has n vertices and at most 2m edges
  • Finding an s-t path in G_f takes O(m + n) time
➢ Total time: O((m + n) ⋅ C)

SLIDE 46

Ford-Fulkerson Algorithm

• Total time: O((m + n) ⋅ C)
➢ This is pseudo-polynomial time
➢ C can be exponentially large in the input length (the number of bits required to write down the edge capacities)
➢ Note: We assumed integer capacities, but this also gives a pseudo-polynomial time algorithm for rational capacities
  • Why?
• Q: Can we convert this to polynomial time?
SLIDE 47

Ford-Fulkerson Algorithm

• Q: Can we convert this to polynomial time?
➢ Not if we choose an arbitrary path in G_f at each step
➢ In the graph below (figure not captured), we might end up repeatedly sending 1 unit of flow across a → b and then reversing it
  • Takes X steps, where X is the large edge capacity in the example, which can be exponential in the input length
SLIDE 48

Ford-Fulkerson Algorithm

• Ways to achieve polynomial time
➢ Find the shortest augmenting path using BFS
  • Edmonds-Karp algorithm
  • Runs in O(n m²) time
  • Can be found in CLRS
➢ Find the maximum bottleneck capacity augmenting path
  • Runs in O(m² log C) time
  • “Weakly polynomial time” (the number of arithmetic operations depends on the number of bits used to write integers)
➢ …

SLIDE 49

Max Flow Problem

• Race to reduce the running time
➢ 1972: O(n m²) Edmonds-Karp
➢ 1980: O(n m log² n) Galil-Naamad
➢ 1983: O(n m log n) Sleator-Tarjan
➢ 1986: O(n m log(n²/m)) Goldberg-Tarjan
➢ 1992: O(n m + n^(2+ε)) King-Rao-Tarjan
➢ 1996: O(n m log_(m/(n log n)) n) King-Rao-Tarjan
  • Note: These are O(n m) when m = Ω(n^(1+ε))
➢ 2013: O(n m) Orlin
  • Breakthrough!
SLIDE 50

Back to Ford-Fulkerson

• We argued that the algorithm must terminate, and must do so in O((m + n) ⋅ C) time
• But we didn’t yet argue correctness, i.e., that the algorithm must terminate with the optimal flow

SLIDE 51

Cuts and Cut Capacities

• (A, B) is an s-t cut if it is a partition of the vertex set (i.e., A ∪ B = V, A ∩ B = ∅), s ∈ A, and t ∈ B
• The capacity of this cut, denoted cap(A, B), is the sum of capacities of the edges leaving A
SLIDE 52

Cuts and Flows

• Theorem: For any flow f and any s-t cut (A, B), v(f) = f_out(A) − f_in(A)
• Proof: Just need to apply flow conservation (exercise!)

SLIDE 53

Cuts and Flows

• Theorem: For any flow f and any s-t cut (A, B), v(f) ≤ cap(A, B)
• Proof:

v(f) = f_out(A) − f_in(A) ≤ f_out(A) = Σ_{e leaving A} f(e) ≤ Σ_{e leaving A} c(e) = cap(A, B)

SLIDE 54

Cuts and Flows

• Theorem: For any flow f and any s-t cut (A, B), v(f) ≤ cap(A, B)
• So, the maximum flow is at most the minimum capacity of any cut.
• In fact, we will show that the maximum flow is equal to the minimum capacity of any cut.
➢ To demonstrate the correctness (i.e., optimality) of the Ford-Fulkerson algorithm, all we need to show is that the flow it generates is equal to the capacity of some cut.

SLIDE 55

Cuts and Flows

• Theorem: Ford-Fulkerson finds a maximum flow.
• Proof:
➢ Let f* denote the flow returned by Ford-Fulkerson.
➢ Look at G_{f*}, but define a cut in G
➢ Let A* = the set of nodes reachable from s in G_{f*}, and B* = V ∖ A*

SLIDE 56

Cuts and Flows

• Theorem: Ford-Fulkerson finds a maximum flow.
• Proof:
➢ (A*, B*) is a valid cut because there is no s-t path in G_{f*} when Ford-Fulkerson terminates, so t ∉ A*

SLIDE 57

Cuts and Flows

• Theorem: Ford-Fulkerson finds a maximum flow.
• Proof:
➢ Blue edges = edges going out of A* in G
➢ Red edges = edges coming into A* in G

SLIDE 58

Cuts and Flows

• Theorem: Ford-Fulkerson finds a maximum flow.
• Proof:
➢ Each blue edge (u, v) must be saturated
  • Otherwise G_{f*} has a forward edge (u, v), and then v ∈ A*
➢ Each red edge (v, u) must have zero flow
  • Otherwise G_{f*} has the reverse edge (u, v), and then v ∈ A*
SLIDE 59

Cuts and Flows

• Theorem: Ford-Fulkerson finds a maximum flow.
• Proof:
➢ Each blue edge (u, v) must be saturated
➢ Each red edge (v, u) must have zero flow
➢ So v(f*) = f_out(A*) − f_in(A*) = cap(A*, B*) ∎
SLIDE 60

Max Flow - Min Cut

• Theorem: In any graph, the value of the maximum flow is equal to the capacity of the minimum cut.
• Our proof already showed that Ford-Fulkerson can be used to find the min cut
➢ Find the max flow f*
➢ Let A* = the set of all nodes reachable from s in G_{f*}
  • Easy to compute using BFS
➢ Then (A*, V ∖ A*) is a min cut
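A sketch of this min-cut extraction on the running example. The flow `opt` below is a hand-written maximum flow for that network (an assumption of the sketch), and A* is computed by BFS over the residual graph.

```python
from collections import deque

def min_cut_side(cap, flow, s):
    # build residual adjacency and BFS from s to find A*
    adj = {}
    for (u, v), c in cap.items():
        if c - flow[(u, v)] > 0:        # residual forward edge
            adj.setdefault(u, set()).add(v)
        if flow[(u, v)] > 0:            # residual reverse edge
            adj.setdefault(v, set()).add(u)
    reachable, queue = {s}, deque([s])
    while queue:
        u = queue.popleft()
        for v in adj.get(u, ()):
            if v not in reachable:
                reachable.add(v)
                queue.append(v)
    return reachable

cap = {('s', 'u'): 20, ('u', 'v'): 30, ('v', 't'): 20, ('s', 'v'): 10, ('u', 't'): 10}
opt = {('s', 'u'): 20, ('u', 'v'): 10, ('v', 't'): 20, ('s', 'v'): 10, ('u', 't'): 10}
A = min_cut_side(cap, opt, 's')
# capacity of (A*, V ∖ A*) = sum of capacities of edges leaving A*
print(sum(c for (u, v), c in cap.items() if u in A and v not in A))  # → 30
```

Here both edges out of s are saturated, so A* = {s} and the cut capacity equals the max flow value 30, as the theorem promises.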

SLIDE 61

Why Study Flow Networks?

• Unlike divide-and-conquer, greedy, or dynamic programming, this doesn’t seem like a framework
➢ It is more like a single problem
• It turns out that many problems can be reduced to this single problem
➢ Hence, it is a very versatile technique
• Next lecture!