SLIDE 1 Chapter 24: Single-Source Shortest Paths. Context: [G = (V, E), w : E → R] is a weighted, directed graph. Definitions:
- 1. A path from vertex u to vertex v, denoted p : u ⇝ v, is a vertex sequence
p = (u = x0, x1, . . . , xk = v), with (xi, xi+1) ∈ E for 0 ≤ i ≤ k − 1. The weight of p is
w(p) = Σ_{i=1}^{k} w(xi−1, xi).
- 2. The shortest-path weight from u to v is
δ(u, v) = min{w(p) : p is a path u ⇝ v} if such a path exists, and δ(u, v) = ∞ otherwise.
The Single-Source-Shortest-Path (SSSP) algorithm accepts a source vertex s ∈ V and calculates v.d = δ(s, v) for all v ∈ V . Observations.
- 1. SSSP also solves the Single-Destination-Shortest-Path (SDSP) problem by running SSSP on the transpose graph GT (all edges reversed).
- 2. SSSP solves the Single-Pair-Shortest-Path (SPSP) problem for vertices u and v by running SSSP on the entire graph, using u as the source. All known algorithms for SPSP have the same worst-case cost as SSSP.
- 3. SSSP solves the All-Pairs-Shortest-Path (APSP) by running SSSP once with
each vertex as the source. Chapter 25 covers more efficient algorithms.
- 4. Negative weights. If negative-weight cycles are present,
δ(s, v) = −∞, if v is reachable from s via a path touching a negative cycle,
δ(s, v) = ∞, if v is not reachable from s,
δ(s, v) is finite, if v is reachable from s and no path s ⇝ v touches a negative cycle.
SLIDE 2
Lemma 24.1 (Optimal Substructure) Let [G = (V, E), w : E → R] be a directed, weighted graph. Let p = (v0, v1, . . . , vk) be a shortest path v0 ⇝ vk. Then pij = (vi, vi+1, . . . , vj) is a shortest path from vi to vj, for all 0 ≤ i < j ≤ k.
Proof: If not, excise the segment vi ⇝ vj and replace it with the shorter alternative. This operation produces a path p′ : v0 ⇝ vk that is shorter than p, which is a contradiction. Note that negative weights do not invalidate this argument.
SLIDE 3
Generic Algorithm

Initialize(G = (V, E), s) {
  for (v ∈ V) { v.d = ∞; v.π = null; }
  s.d = 0;
}

Relax((u, v), w) {
  if (v.d > u.d + w(u, v)) { v.d = u.d + w(u, v); v.π = u; }
}

Different algorithms result from different strategies for deploying Relax operations. We will examine two such strategies: (a) the Bellman-Ford algorithm, and (b) Dijkstra’s algorithm.
Conjecture: The edges (v.π, v) construct a Gπ-tree, the predecessor graph, as in the breadth-first and depth-first algorithms. Specifically,
Gπ = (Vπ, Eπ), Vπ = {v ∈ V : v.π ≠ null} ∪ {s}, Eπ = {(v.π, v) : v ∈ Vπ \ {s}},
and the v.π attributes can be used to determine a shortest path from any reachable v back to s. In the current context, shortest now means minimal weight sum on the connecting edges.
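In Python, these two routines might be sketched as follows (a dict-based sketch; the function and variable names are our own, not from the text):

```python
import math

def initialize(vertices, s):
    """Set every tentative distance v.d to infinity and every predecessor
    v.pi to None; the source s starts at distance 0."""
    d = {v: math.inf for v in vertices}
    pi = {v: None for v in vertices}
    d[s] = 0
    return d, pi

def relax(u, v, w, d, pi):
    """If reaching v through u beats the current estimate, update
    v's distance and predecessor."""
    if d[v] > d[u] + w[(u, v)]:
        d[v] = d[u] + w[(u, v)]
        pi[v] = u
```

Every algorithm below is then a particular schedule of `relax` calls after one `initialize`.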
SLIDE 4 Properties:
Lemma 24.10: (Triangle inequality) (u, v) ∈ E implies δ(s, v) ≤ δ(s, u) + w(u, v).
Proof: If u is not reachable from s, then δ(s, u) = ∞ by definition and the inequality cannot fail. If u is reachable from s, then the path s ⇝ u → v is a competitor in the competition over all such paths that establishes δ(s, v). If we establish s ⇝ u via the shortest possible path, this competitor achieves weight δ(s, u) + w(u, v), which must be at least the minimum that gives δ(s, v).
SLIDE 5
Lemma 24.11: (Upper-bound property): (a) v.d ≥ δ(s, v) at all times; (b) if v.d = δ(s, v) at some point in the algorithm, then v.d undergoes no further change.
Proof: We note that v.d can change only when an edge of the form (u, v) is relaxed. We proceed via induction on the number of relax operations. Before the first such relaxation, the initialization routine sets v.d = ∞ for all v ≠ s. Consequently, v.d ≥ δ(s, v) for all v ≠ s. As for s, the initialization routine sets s.d = 0, whereas δ(s, s) = 0 or −∞, depending on whether or not s lies on a negative cycle. In either case s.d ≥ δ(s, s).

Relax((u, v), w) {
  if (v.d > u.d + w(u, v)) { v.d = u.d + w(u, v); v.π = u; }
}

Now, when an edge (u, v) is relaxed, only v.d is potentially changed. Hence, we assume via induction that u.d ≥ δ(s, u). Then, if the relaxation changes v.d, we have v.d = u.d + w(u, v) ≥ δ(s, u) + w(u, v) ≥ δ(s, v), the last inequality by the triangle inequality. We conclude that v.d ≥ δ(s, v) for all v ∈ V throughout the operation of the algorithm, which completes the proof of part (a).
For part (b), suppose v.d = δ(s, v) occurs at some point. We note that the Relax((u, v), w) code either leaves v.d the same or lowers it to a strictly smaller value. If, on entry, Relax((u, v), w) encounters v.d = δ(s, v), then it must leave v.d unchanged, since otherwise it would produce v.d < δ(s, v), contradicting (a).
SLIDE 6 Lemma 24.12: (No-path property): If no path s ⇝ v exists, then v.d = ∞ at all times.
Proof: As a trivial path exists from s to s, the property is vacuously true when v = s. If v ≠ s and no path s ⇝ v exists, then δ(s, v) = ∞ by definition. Moreover, the initialization routine sets v.d = ∞. As the upper-bound property assures that v.d ≥ δ(s, v) at all times, we conclude that v.d = ∞ persists throughout the algorithm.
Lemma 24.14: (Convergence property): If p : s ⇝ u → v is a shortest path from s to v, and u.d = δ(s, u) at any time prior to relaxing edge (u, v), then v.d = δ(s, v) at all times after that relaxation.
Proof: Given the hypothesis, we have u.d = δ(s, u) prior to relaxing edge (u, v). In that relaxation, one of two actions is taken. One possibility occurs when v.d ≤ u.d + w(u, v) as the relaxation starts, in which case v.d is left unchanged. The other possibility occurs when the relaxation sets v.d = u.d + w(u, v). In either case, after the relaxation we have
v.d ≤ u.d + w(u, v) = δ(s, u) + w(u, v) = δ(s, v),
where the last equality follows because s ⇝ u → v is a shortest path, and therefore its weight must be δ(s, v). Hence v.d ≤ δ(s, v). Since the upper-bound property forces v.d ≥ δ(s, v) at all times, we must have v.d = δ(s, v). Part (b) of the upper-bound property then insists that v.d = δ(s, v) at all subsequent times.
SLIDE 7 Lemma 24.15: (Path-relaxation property): If p = (s = v0, v1, . . . , vk) is a shortest path from s to vk, and edges (v0, v1), (v1, v2), . . . , (vk−1, vk) are relaxed in this order, then vk.d = δ(s, vk) at all times after this relaxation sequence. The property holds regardless of any other relaxations that are interleaved with the ordered sequence.
Proof: We achieve the final result by showing that vi.d = δ(s, vi) after the ith relaxation in the sequence. We note that the existence of a shortest path p : s ⇝ vk implies that no negative cycles are touched on any path from s to vk. Hence δ(s, s) = 0 = s.d after initialization, and via the upper-bound property, at all times thereafter. We then have the desired result for i = 0. Proceeding by induction, we assume that vi−1.d = δ(s, vi−1) when we perform a subsequent relaxation of edge (vi−1, vi). That relaxation occurs when vi is unpacked from the adjacency list of vi−1. Then, since vi−1.d = δ(s, vi−1) at that time, the convergence property forces vi.d = δ(s, vi) after the relaxation. Moreover, relaxations interleaved between the point where vi−1.d = δ(s, vi−1) is established and the specific relaxation that forces vi.d = δ(s, vi) have no bearing on this argument.
Lemma 24.17: (Predecessor subgraph property): In the absence of negative cycles, once v.d = δ(s, v) is established for all v ∈ V , the Gπ graph is a shortest-path tree rooted at s. Specifically, Gπ = (Vπ, Eπ) satisfies
- 1. Vπ = {v ∈ V : v is reachable from s}.
- 2. Gπ = (Vπ, Eπ) is a tree rooted at s.
- 3. For all v ∈ Vπ, the unique simple path p : s v is a shortest path from s to v.
Proof: deferred to chapter’s end.
SLIDE 8

Bellman-Ford(G = (V, E), w : E → R, s ∈ V ) {
  Initialize(G, s);
  for (i = 1 to |V| − 1)
    for ((u, v) ∈ E)
      Relax((u, v), w);
  for ((u, v) ∈ E)
    if (v.d > u.d + w(u, v))
      return false;
  return true;
}

Observations.
- 1. A false return implies a negative cycle reachable from s: to be proved.
- 2. Running time is Θ(V E + E) = Θ(V E): obvious.
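A direct transcription into Python might look like this (a sketch; the edge-list and weight-dict representation is our own assumption):

```python
import math

def bellman_ford(vertices, edges, w, s):
    """edges: list of (u, v) pairs; w: dict mapping (u, v) -> weight.
    Returns (ok, d, pi); ok is False iff a negative cycle is
    reachable from the source s."""
    d = {v: math.inf for v in vertices}
    pi = {v: None for v in vertices}
    d[s] = 0
    for _ in range(len(vertices) - 1):      # |V| - 1 full passes
        for u, v in edges:
            if d[u] + w[(u, v)] < d[v]:     # Relax((u, v), w)
                d[v] = d[u] + w[(u, v)]
                pi[v] = u
    for u, v in edges:                      # negative-cycle check
        if d[v] > d[u] + w[(u, v)]:
            return False, d, pi
    return True, d, pi
```

Python's `math.inf` behaves correctly here: `inf + w` stays `inf`, so unreached vertices never win a relaxation.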
SLIDE 9
Bellman-Ford(G = (V, E), w : E → R, s ∈ V ) {
  Initialize(G, s);
  for (i = 1 to |V| − 1)
    for ((u, v) ∈ E)
      Relax((u, v), w);
  for ((u, v) ∈ E)
    if (v.d > u.d + w(u, v))
      return false;
  return true;
}

Lemma 24.2: Let [G = (V, E), w : E → R] be a weighted, directed graph with no negative cycles reachable from s. After termination of the first for-loop in Bellman-Ford, we have v.d = δ(s, v) for all vertices v reachable from s.
Proof: Suppose v is reachable via a shortest path p = (s = v0, v1, . . . , vk = v). As every edge is relaxed in each iteration i = 1, 2, . . . , |V| − 1, the edges of path p appear, in order, within the overall relaxation sequence:
Initialize . . . Relax((v0, v1), w) . . . Relax((v1, v2), w) . . . . . . Relax((vk−1, vk), w).
By the path-relaxation property, vi.d = δ(s, vi) for i = 0, 1, 2, . . . , k at the conclusion. Note k ≤ |V| − 1, since a shortest path can be taken simple and therefore has at most |V| − 1 edges.
SLIDE 10 Corollary 24.3: Let [G = (V, E), w : E → R] be a weighted, directed graph with no negative cycles reachable from s. Then for all v ∈ V , v is reachable from s if and only if v.d < ∞ at the conclusion of Bellman-Ford.
Proof: v reachable from s implies δ(s, v) < ∞ by the definition of δ(s, v). Also, v reachable from s implies (Lemma 24.2) that v.d = δ(s, v) at the conclusion of the first for-loop in Bellman-Ford. The second for-loop makes no further adjustments to the v.d values. Consequently, v.d < ∞ at the conclusion of the algorithm. Conversely, v.d < ∞ at termination implies δ(s, v) ≤ v.d < ∞ by the upper-bound property. Therefore, v is reachable.
SLIDE 11 Theorem 24.4: (Bellman-Ford correctness)
(a) Let [G = (V, E), w : E → R] be a weighted, directed graph with no negative cycles reachable from s. Then Bellman-Ford returns true, v.d = δ(s, v) for all v ∈ V , and Gπ is a shortest-path tree rooted at s.
(b) Let [G = (V, E), w : E → R] be a weighted, directed graph with a negative cycle reachable from s. Then Bellman-Ford returns false.
Proof: (a) For this part, we assume [G = (V, E), w : E → R] is a weighted, directed graph with no negative cycles reachable from s. If v is reachable from s, then Lemma 24.2 ensures that v.d = δ(s, v) < ∞ at the conclusion of the first for-loop. If v is not reachable from s, then Corollary 24.3 ensures that v.d = ∞ = δ(s, v) at termination, and since no v.d values are changed in the second for-loop, it must be the case that v.d = ∞ = δ(s, v) at the conclusion of the first for-loop. That is, at the conclusion of the first for-loop, v.d = δ(s, v) for all v ∈ V . Consequently, for each (u, v) tested in the second for-loop, we have v.d = δ(s, v) ≤ δ(s, u) + w(u, v) = u.d + w(u, v). It follows that the test (if v.d > u.d + w(u, v) . . . ) is always false. We conclude that the algorithm returns true with v.d = δ(s, v) for all v ∈ V . At this point, the predecessor subgraph property ensures that the Gπ tree is a shortest-path tree rooted at s.
(b) Let the negative cycle reachable from s be c = (v0, v1, . . . , vk = v0). Then
Σ_{i=1}^{k} w(vi−1, vi) < 0. (∗)
As the cycle is reachable from s, each vi is reachable from s. Corollary 24.3 then implies vi.d < ∞ for 0 ≤ i < k at the conclusion of the first for-loop. If Bellman-Ford returns true, then vi.d ≤ vi−1.d + w(vi−1, vi) for 1 ≤ i ≤ k. Summing around the cycle,
Σ_{i=1}^{k} vi.d ≤ Σ_{i=1}^{k} vi−1.d + Σ_{i=1}^{k} w(vi−1, vi) = Σ_{i=0}^{k−1} vi.d + Σ_{i=1}^{k} w(vi−1, vi).
SLIDE 12 Since v0 = vk, we have Σ_{i=1}^{k} vi.d = Σ_{i=0}^{k−1} vi.d. Canceling this common sum from both sides of the inequality above leaves
0 ≤ Σ_{i=1}^{k} w(vi−1, vi),
contradicting (∗) above. We conclude that Bellman-Ford returns false.
SLIDE 13

DAG-Shortest-Paths(G = (V, E), w : E → R, s ∈ V ) {
  TopoSort vertices, giving V = (v1, v2, . . . , vn);   // O(V + E)
  Initialize(G, s);                                     // O(V)
  for (v ∈ (v1, . . . , vn))
    for (u ∈ v.adj)
      Relax((v, u), w);
}

As TopoSort, as well as the other activity in the code, is O(V + E), we have that DAG-Shortest-Paths is O(V + E), a linear algorithm.
Theorem 24.5: Suppose [G = (V, E), w : E → R] is a weighted, acyclic, directed graph. Then, at the conclusion of DAG-Shortest-Paths(G, w, s), we have v.d = δ(s, v) for all v ∈ V and Gπ is a shortest-path tree.
Proof: If v is not reachable from s, then δ(s, v) = ∞ by definition, and v.d = ∞ when the algorithm concludes. If v is reachable from s, let p = (s = v0, v1, . . . , vk = v) be a shortest path. Because the edges are relaxed in the order given by the topological sort, in which all edges point forward, the relaxation sequence contains (v0, v1), (v1, v2), . . . , (vk−1, vk) in order. By the path-relaxation property, v.d = δ(s, v) when the algorithm concludes. Finally, at the conclusion of the algorithm, the predecessor subgraph property ensures that Gπ is a shortest-path tree rooted at s.
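Assuming the topological order has already been computed (as the TopoSort step in the pseudocode does), a Python sketch might be (names and the adjacency representation are our own):

```python
import math

def dag_shortest_paths(topo_order, adj, s):
    """topo_order: the vertices in topological order.
    adj: dict mapping v to a list of (successor, weight) pairs.
    Each edge is relaxed exactly once, giving O(V + E) overall."""
    d = {v: math.inf for v in topo_order}
    pi = {v: None for v in topo_order}
    d[s] = 0
    for v in topo_order:            # forward sweep in topological order
        for u, wt in adj.get(v, []):
            if d[v] + wt < d[u]:    # Relax((v, u), w)
                d[u] = d[v] + wt
                pi[u] = v
    return d, pi
```

Negative edge weights are harmless here: acyclicity, not nonnegativity, is what makes the single sweep sufficient.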
SLIDE 14 We now consider Dijkstra’s algorithm, which assumes no negative weights.

Dijkstra(G = (V, E), w : E → R, s ∈ V ) {
  Initialize(G, s);
  A = ∅;
  Q ← V;                       // Θ(V); min-heap Q ordered by v.d
  while (Q ≠ ∅) {
    u = Q.extractMin();        // O(V lg V) over all extractions
    A = A ∪ {u};
    for (v ∈ u.adj)
      Relax((u, v), w);        // O(E lg V); v.d decrease while v ∈ Q
  }
}

Observations.
- 1. Set A exists to facilitate the proof of correctness.
- 2. Complexity: We have O(V) prior to the while-loop, plus Θ(V) while-loop executions, each involving an O(lg V) extraction. Each edge is relaxed exactly once, when found on the adjacency list of its source vertex. The relaxation involves decreasing the key v.d, which incurs cost O(lg V) when v ∈ Q. The total is O(V + V lg V + E lg V) = O((V + E) lg V), just above linear, as compared to O(V E) for the Bellman-Ford algorithm.
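A Python sketch using the standard heapq module (our own adaptation): heapq has no decrease-key, so the usual workaround is to push a new entry on each relaxation and skip stale entries on extraction, which preserves the O((V + E) lg V) bound.

```python
import heapq
import math

def dijkstra(vertices, adj, s):
    """adj: dict mapping u to a list of (v, weight) pairs; weights >= 0.
    The set 'done' plays the role of the set A on the slide."""
    d = {v: math.inf for v in vertices}
    pi = {v: None for v in vertices}
    d[s] = 0
    done = set()
    q = [(0, s)]                         # min-heap of (v.d, v) entries
    while q:
        du, u = heapq.heappop(q)
        if u in done:
            continue                     # stale entry; u already finalized
        done.add(u)
        for v, wt in adj.get(u, []):
            if du + wt < d[v]:           # Relax((u, v), w)
                d[v] = du + wt
                pi[v] = u
                heapq.heappush(q, (d[v], v))
    return d, pi
```

With negative weights this sketch, like Dijkstra itself, can finalize a vertex too early; that is exactly why the nonnegativity assumption appears in the correctness proof below.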
SLIDE 15
Theorem 24.6: (Dijkstra correctness): On a weighted graph [G = (V, E), w : E → R] with nonnegative weights and source s, Dijkstra terminates with u.d = δ(s, u) for all u ∈ V .
Proof via loop invariant: At the start and finish of each while-loop iteration, v.d = δ(s, v) for all v ∈ A. As A = V when the algorithm concludes, proof of this invariant is proof of the theorem.
In the base case, A = ∅ at the start of the first iteration, and the invariant is vacuously true. The first Q extraction yields vertex s, since Initialize set s.d = 0 and v.d = ∞ for all v ≠ s. Thus at the conclusion of the first iteration, A = {s} and s.d = 0 = δ(s, s), as required.
For the inductive step, suppose, for purposes of deriving a contradiction, that u is the first vertex for which u.d ≠ δ(s, u) when u is added to A. Then u ≠ s, since s.d = 0 = δ(s, s) when s is added to A. Also, A ≠ ∅ when u is added, because s ∈ A. Also, there exists a path s ⇝ u: otherwise δ(s, u) = ∞ by definition and u.d = ∞ by the no-path property, which would force u.d = δ(s, u), contrary to the choice of u.
SLIDE 16 Let p be a shortest path from s to u. The sketch shows the situation just before u is added to A. Specifically, there exists a vertex y on the path that is the first vertex, following the path from s, that lies outside A. Let x be the predecessor of y on this path. By the choice of y, x ∈ A, and x.d = δ(s, x) held when x was added to A. Just after x was added to A, its adjacency list was explored, y was found, and Relax((x, y), w) was called. By the convergence property, y.d = δ(s, y) at all times after that relaxation, including in particular the later time when u is added to A.
Now, along the shortest path p : s ⇝ x → y ⇝ u, we have δ(s, y) ≤ δ(s, u) by the optimal-substructure lemma, since the weights on the y ⇝ u portion are nonnegative.
Also, both u and y were in V \ A when u was extracted. Therefore u.d ≤ y.d by the ordering property of the min-queue.
So, when u is added to A,
y.d = δ(s, y) ≤ δ(s, u) ≤ u.d ≤ y.d,
the second-to-last inequality by the upper-bound property. This chain implies equality throughout, and in particular δ(s, u) = u.d when u is added to A. This is the desired contradiction, as u was chosen to be the first vertex for which u.d ≠ δ(s, u) when added to A. We conclude that Dijkstra’s algorithm is correct.
SLIDE 17 There remains only to show that, in either algorithm (Bellman-Ford or Dijkstra), the Gπ tree is indeed a shortest-path tree rooted at s.
Lemma 24.17: (Predecessor subgraph property) In the absence of negative cycles, once v.d = δ(s, v) is established for all v ∈ V , the Gπ graph is a shortest-path tree rooted at s. That is, Gπ = (Vπ, Eπ) satisfies
- 1. Vπ = {v ∈ V : v is reachable from s}.
- 2. Gπ = (Vπ, Eπ) is a tree rooted at s.
- 3. For all v ∈ Vπ, the unique simple path p : s v is a shortest path from s to v.
Proof: We recall Gπ = (Vπ, Eπ), Vπ = {v ∈ V : v.π ≠ null} ∪ {s}, Eπ = {(v.π, v) : v ∈ Vπ \ {s}}. We establish the required properties for a shortest-path tree in a series of steps.
(a) We first show that no Relax operation can introduce a cycle in Gπ. For purposes of deriving a contradiction, suppose Relax((vk−1, v0), w) creates the simple cycle (v0, v1, . . . , vk−1, v0) by assigning v0.π = vk−1. That is, when this Relax executes, we have vi.π = vi−1 for 1 ≤ i ≤ k − 1, and the Relax operation sets v0.π = vk−1. Each vertex on the cycle has a non-null v.π value, which implies each has a finite v.d value. Consequently, δ(s, v) ≤ v.d < ∞ for each such vertex. It follows from the no-path property that each vertex is reachable from s.
Consider the moment just prior to Relax((vk−1, v0), w). Although we do not know the order in which the various vi.d values were established, we reason that, for any 1 ≤ i ≤ k − 1, the last update of vi.d set vi.d = vi−1.d + w(vi−1, vi). Moreover, if vi−1.d changed subsequently, it decreased. So, at the moment
SLIDE 18 under consideration,
vi.d ≥ vi−1.d + w(vi−1, vi), for all 1 ≤ i ≤ k − 1,
v0.d > vk−1.d + w(vk−1, v0),
the second inequality strict because Relax((vk−1, v0), w) changes v0.π to close the cycle, and therefore we must have v0.d > vk−1.d + w(vk−1, v0) at the moment just before that relaxation. Summing these k inequalities gives
Σ_{i=0}^{k−1} vi.d > vk−1.d + w(vk−1, v0) + Σ_{i=1}^{k−1} [vi−1.d + w(vi−1, vi)]
= vk−1.d + w(vk−1, v0) + Σ_{i=0}^{k−2} [vi.d + w(vi, vi+1)]
= Σ_{i=0}^{k−1} vi.d + w(vk−1, v0) + Σ_{i=0}^{k−2} w(vi, vi+1).
Canceling the common sum Σ_{i=0}^{k−1} vi.d leaves
0 > w(vk−1, v0) + Σ_{i=0}^{k−2} w(vi, vi+1).
The last inequality establishes a negative cycle in the graph, which contradicts our hypothesis. We conclude that Gπ is acyclic.
SLIDE 19 (b) We next show that Vπ contains all vertices in V that are reachable from s. We have s ∈ Vπ by definition. For v reachable from s, but v ≠ s, we require v.π ≠ null for membership in Vπ. But v.d = δ(s, v) < ∞ holds for all vertices reachable from s, so at least one relaxation updated v.d from its initial value of ∞. As v.π is set in the same relaxation that updates v.d, we must have v.π ≠ null.
We conclude that Gπ is an acyclic graph that contains all vertices reachable from s. It remains to show that Gπ is a tree. This purpose is achieved if we can show that, for each v ∈ Vπ, there is a unique path through the edges Eπ from s to v. Suppose there are two paths from s to a vertex v in Gπ. Then there must exist a pair of distinct vertices, such as x and y in the sketch, with x on one path and y on the other. Let u be the vertex where the paths diverge and z the vertex where they rejoin; u = s is possible, as is z = v. In this scenario, z.π must equal both the predecessor leading back through x and the predecessor leading back through y, a contradiction, since z.π is a single vertex.
We conclude that Gπ is an acyclic graph, containing all vertices reachable from s, with a unique path to any v ∈ Vπ. Therefore Gπ is a tree rooted at s.
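Once the v.π attributes are in place, recovering the unique simple path from s to a reachable v is a short walk up this tree. A Python sketch (function name and dict representation are our own):

```python
def path_to(v, pi, s):
    """Walk predecessor pointers from v back to the source s.  The reversed
    walk is the unique simple path s -> v in the predecessor tree G_pi.
    Returns None when v is unreachable (its pointer chain never reaches s)."""
    path = []
    while v is not None:
        path.append(v)
        if v == s:
            return list(reversed(path))
        v = pi[v]
    return None
```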
SLIDE 20 (c) The final necessity is to show that Gπ is a shortest-path tree rooted at s. Let p = (s = v0, v1, . . . , vk = v) be the unique path from s to v in Gπ. We have
vi.d = δ(s, vi),
vi.d ≥ vi−1.d + w(vi−1, vi), for 1 ≤ i ≤ k.
The second relation holds because the edge (vi−1, vi) ∈ Eπ was relaxed at some point, setting vi.d = vi−1.d + w(vi−1, vi), and vi−1.d could only have decreased thereafter. It follows that
w(vi−1, vi) ≤ vi.d − vi−1.d = δ(s, vi) − δ(s, vi−1),
and the sum telescopes:
w(p) = Σ_{i=1}^{k} w(vi−1, vi) ≤ Σ_{i=1}^{k} [δ(s, vi) − δ(s, vi−1)] = δ(s, vk) − δ(s, v0) = δ(s, vk) − δ(s, s) = δ(s, vk) = δ(s, v).
As w(p′) ≥ δ(s, v) for any path p′ from s to v through E, including those that use only edges in Eπ, we conclude that w(p) = δ(s, v). That is, p is a shortest path from s to v.
SLIDE 21 Application: Viability check on difference constraints.
Problem: Given n real variables x1, x2, . . . , xn and a set of m constraints, each having the form xj − xi ≤ bij, determine if there exists an assignment of values to the variables that satisfies all of the constraints. For example, for n = 5 and m = 8, the following system Ax ≤ b describes a set of difference constraints:

[  1 −1  0  0  0 ]            [  0 ]      x1 − x2 ≤ b21 = 0
[  1  0  0  0 −1 ]   [ x1 ]   [ −1 ]      x1 − x5 ≤ b51 = −1
[  0  1  0  0 −1 ]   [ x2 ]   [  1 ]      x2 − x5 ≤ b52 = 1
[ −1  0  1  0  0 ]   [ x3 ] ≤ [  5 ]  ⇔   x3 − x1 ≤ b13 = 5
[ −1  0  0  1  0 ]   [ x4 ]   [  4 ]      x4 − x1 ≤ b14 = 4
[  0  0 −1  1  0 ]   [ x5 ]   [ −1 ]      x4 − x3 ≤ b34 = −1
[  0  0 −1  0  1 ]            [ −3 ]      x5 − x3 ≤ b35 = −3
[  0  0  0 −1  1 ]            [ −3 ]      x5 − x4 ≤ b45 = −3

If a solution exists, many solutions exist, via the following lemma.
Lemma 24.8: Suppose vector x = (x1, . . . , xn) is a solution to the system Ax ≤ b of difference constraints. Then, for any constant d, x′ = (x1 + d, x2 + d, . . . , xn + d) is also a solution.
Proof: Let u be the vector with ones in all components, u = (1, 1, . . . , 1), so that x′ = x + du. Note that Au = 0, since each row of the product is the sum of n − 2 zeros, a plus one, and a minus one. Consequently, Ax′ = A(x + du) = Ax + d(Au) = Ax + 0 = Ax ≤ b.
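The lemma is easy to check numerically. The sketch below encodes the example's constraints as (j, i, b) triples (our own encoding) and verifies one feasible assignment together with a shifted copy of it:

```python
def satisfies(constraints, x):
    """constraints: list of (j, i, b) triples encoding x_j - x_i <= b.
    x: dict mapping variable index to its value."""
    return all(x[j] - x[i] <= b for (j, i, b) in constraints)

# The eight constraints of the example system, as (j, i, b) triples.
cons = [(1, 2, 0), (1, 5, -1), (2, 5, 1), (3, 1, 5),
        (4, 1, 4), (4, 3, -1), (5, 3, -3), (5, 4, -3)]

# One feasible assignment (checked directly against the constraints),
# and the same assignment shifted by d = 7; by Lemma 24.8 the shift
# preserves feasibility.
x = {1: -5, 2: -3, 3: 0, 4: -1, 5: -4}
shifted = {k: v + 7 for k, v in x.items()}
```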
SLIDE 22
We associate a graph, called the constraint graph, with this problem as follows. Let V = {v0, v1, v2, . . . , vn} be the vertex set: a vertex for each of the variables x1, x2, . . . , xn, plus an extra vertex v0. Let E = {(vi, vj) : xj − xi ≤ bij is a constraint} ∪ {(v0, vi) : 1 ≤ i ≤ n}. That is, we have an edge for each difference constraint, plus an edge from v0 to each vi, 1 ≤ i ≤ n, for a total of m + n edges. The edge weights are assigned as follows: (a) w(v0, vi) = 0 for 1 ≤ i ≤ n; (b) if xj − xi ≤ bij is a constraint, then w(vi, vj) = bij. The constraint graph for the example above thus has an edge (vi, vj) of weight bij for each of the eight listed constraints, for example an edge (v2, v1) of weight 0 for x1 − x2 ≤ 0.
SLIDE 23
Theorem 24.9: Given a system Ax ≤ b of difference constraints, suppose A is an m × n matrix. Let G = (V, E) be the corresponding constraint graph, in which |V | = n + 1 and |E| = m + n. If G contains no negative-weight cycles, then
xi = δ(v0, vi), for 1 ≤ i ≤ n,
is a solution to the system. If G contains a negative cycle, then there is no solution to the system.
Proof: Suppose G contains no negative cycles, and consider a constraint xj − xi ≤ bij. Then bij = w(vi, vj) in the constraint graph, and
δ(v0, vj) ≤ δ(v0, vi) + δ(vi, vj), by the triangle inequality,
δ(v0, vj) ≤ δ(v0, vi) + w(vi, vj), since the single edge gives δ(vi, vj) ≤ w(vi, vj).
Hence xj − xi = δ(v0, vj) − δ(v0, vi) ≤ w(vi, vj) = bij. We conclude that (x1 = δ(v0, v1), . . . , xn = δ(v0, vn)) is a solution.
SLIDE 24 Now, suppose that G contains a negative cycle, say c = (vj1, vj2, . . . , vjk), where vjk = vj1. Since v0 has no incoming edges, v0 cannot be a vertex on this cycle. Consequently, all edges in the cycle have weights derived from the difference constraints. For example, w(vj1, vj2) = bj1,j2, which is associated with the difference constraint xj2 − xj1 ≤ bj1,j2. Now, for purposes of deriving a contradiction, suppose a solution x1, . . . , xn exists. For the vertices on the cycle, we have
xj2 − xj1 ≤ bj1,j2 = w(vj1, vj2)
xj3 − xj2 ≤ bj2,j3 = w(vj2, vj3)
. . .
xjk − xjk−1 ≤ bjk−1,jk = w(vjk−1, vjk).
Adding these inequalities telescopes the left side to xjk − xj1 = 0, giving
0 ≤ Σ_{i=1}^{k−1} w(vji, vji+1) = C,
where C is the weight sum around the supposedly negative cycle. This contradicts C < 0. We conclude that no solution exists.
SLIDE 25 Bellman-Ford solutions. Suppose we have a system Ax ≤ b of m difference constraints, where x = (x1, . . . , xn).
(a) One solution simply runs Bellman-Ford on the constraint graph, using v0 as the source. If the algorithm returns false, indicating a negative cycle, we conclude that the system has no solution. Otherwise, xi = δ(v0, vi), for 1 ≤ i ≤ n, is a solution. Since Bellman-Ford is O(V E), this approach is O((n + 1)(m + n)) = O(n² + nm).
(b) Examining the constraint graph, we note that each δ(v0, vi) ≤ 0, since a path of weight zero is available from v0 to each of the remaining vertices. Hence, we can modify the initialization routine to start the v.d values at zero, rather than infinity.

Bellman-Ford(G = (V, E), w : E → R, s ∈ V ) {
  Initialize(G, s);
  for (i = 1 to |V| − 1)
    for ((u, v) ∈ E)
      Relax((u, v), w);
  for ((u, v) ∈ E)
    if (v.d > u.d + w(u, v))
      return false;
  return true;
}

Initialize(G = (V, E), s) {
  for (v ∈ V) {
    v.d = 0;        ← change
    v.π = null;
  }
  s.d = 0;
}

Relax((u, v), w) {
  if (v.d > u.d + w(u, v)) { v.d = u.d + w(u, v); v.π = u; }
}

We now observe that the Relax routine changes only the v.d value appearing at the destination end of the edge. As v0 has no incoming edges, its v.d value is never adjusted. Consequently, we can dispense with v0 and the edges connecting it to the rest of the graph; the algorithm will compute in the same manner as if the entire graph were present. The input to Bellman-Ford is then the constraint graph with v0 and all of its incident edges removed. The source vertex can be chosen arbitrarily. As the reduced graph has n vertices and m edges, the running time is O(mn).
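Strategy (b) can be sketched in Python as follows; the (j, i, b) constraint encoding and the function name are our own. Starting every d value at zero stands in for v0 and its zero-weight edges, so n − 1 passes over the m constraint edges suffice.

```python
def solve_difference_constraints(n, constraints):
    """constraints: list of (j, i, b) triples encoding x_j - x_i <= b,
    with variables numbered 1..n.  Runs Bellman-Ford on the constraint
    graph with v0 removed, all d values initialized to 0.  Returns a
    solution dict, or None when a negative cycle makes the system
    infeasible.  O(nm)."""
    d = {v: 0 for v in range(1, n + 1)}               # modified Initialize
    edges = [(i, j, b) for (j, i, b) in constraints]  # edge v_i -> v_j, weight b
    for _ in range(n - 1):
        for u, v, wt in edges:
            if d[u] + wt < d[v]:                      # Relax
                d[v] = d[u] + wt
    for u, v, wt in edges:                            # negative-cycle check
        if d[v] > d[u] + wt:
            return None
    return d
```

On the example system of Slide 21 this returns the δ(v0, vi) values, and any constant shift of them is also feasible by Lemma 24.8.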