SLIDE 1
Approximating APSP without Scaling: Equivalence of Approximate Min-Plus and Exact Min-Max
Karl Bringmann, Marvin Künnemann, Karol Węgrzycki
Max Planck Institute Saarbrücken, University of Warsaw
STOC 2019
SLIDE 2 All-Pairs Shortest Path
Problem APSP: Given a directed graph G with positive edge weights in {1, ..., W}, compute the shortest-path distance between all pairs of vertices.
Algorithms: O(n^3) [Floyd'62, Warshall'62], n^3 / 2^{Ω(√log n)} [Williams'14]
APSP hypothesis: for all δ > 0, APSP has no O(n^{3−δ}) algorithm (even for W = poly(n)). [Vassilevska-Williams, Williams '10]
SLIDE 4 Approximate APSP
(1 + ε)-approximate APSP: Compute a (1 + ε)-approximation of all pairwise distances.
Algorithm: O(n^ω/ε · log W)
This yields the same upper bound for approximate graph characteristics, such as: Diameter, Radius, Median, Minimum Weight Triangle, Minimum Weight Cycle, ...
This is state-of-the-art for:
◮ directed graphs for any constant ε > 0,
◮ undirected graphs for any ε ∈ (0, 1).
O(n^ω) ≤ O(n^2.373) is the running time of fast matrix multiplication.
SLIDE 6 Zwick's O(n^ω/ε · log W) Algorithm
MinPlusProduct: Given A, B ∈ R^{n×n}, their (min, +)-product is the matrix C ∈ R^{n×n} with C_{i,j} = min_{k∈[n]} (A_{i,k} + B_{k,j}).
Algorithms: O(n^3), n^3 / 2^{Ω(√log n)}
Theorem [Zwick'02]: If approximate MinPlusProduct can be computed in time O(n^c/ε), then approximate APSP can be computed in time O(n^c/ε). (c is some constant)
Sketch of the reduction: Given adjacency matrix A:
1. Add self-loops with cost 0,
2. Square ⌈log n⌉ times using approximate MinPlusProduct,
3. D := A^{2^{⌈log n⌉}},
4. Then D_{i,j} approximates the distance between i and j.
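The repeated-squaring reduction above can be sketched in a few lines. This is a minimal Python sketch that uses an exact (min, +)-square; the actual reduction would call the approximate product with accuracy ε/⌈log n⌉ per squaring. Function names are mine, not from the talk.

```python
import math

INF = float("inf")

def min_plus_square(A):
    """Exact (min,+)-square: C[i][j] = min_k A[i][k] + A[k][j]."""
    n = len(A)
    return [[min(A[i][k] + A[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def apsp_by_squaring(adj):
    """Zwick-style reduction: add 0-cost self-loops, then square ceil(log2 n) times."""
    n = len(adj)
    D = [[0 if i == j else adj[i][j] for j in range(n)] for i in range(n)]
    for _ in range(max(1, math.ceil(math.log2(n)))):
        D = min_plus_square(D)
    return D

# Adjacency matrix of a directed triangle; INF marks missing edges.
adj = [
    [INF, 1, INF],
    [INF, INF, 2],
    [7, INF, INF],
]
print(apsp_by_squaring(adj))  # [[0, 1, 3], [9, 0, 2], [7, 8, 0]]
```

With 0-cost self-loops, k squarings cover all paths of at most 2^k edges, so ⌈log n⌉ squarings suffice for shortest paths, which are simple.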
SLIDE 7 Zwick's O(n^ω/ε · log W) Algorithm
MinPlusProduct: Given A, B ∈ R^{n×n}, their (min, +)-product is the matrix C ∈ R^{n×n} with C_{i,j} = min_{k∈[n]} (A_{i,k} + B_{k,j}).
Theorem ((1 + ε)-approximate MinPlusProduct in O(n^ω/ε · log W)) [Zwick'02]:
For q := 1, 2, 4, ..., W:
Search for all outputs C_{i,j} ∈ [q, 2q]:
◮ Remove all entries > 2q of A, B (by setting them to ∞),
◮ Round all entries in A, B to multiples of εq,
◮ Scale all entries by 1/(εq), (hence, the new W is O(1/ε))
◮ Run the exact O(W·n^ω) algorithm for MinPlusProd.
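The scaling loop above can be sketched directly. This minimal Python sketch substitutes a brute-force cubic product for the fast O(W·n^ω) routine, and accepts an output at scale q only if it lands near [q, 2q]; function names and the acceptance threshold are my choices, not from the talk.

```python
import math

INF = float("inf")

def exact_min_plus(A, B):
    """Brute-force (min,+)-product; stand-in for the fast O(W * n^omega) routine."""
    n = len(A)
    return [[min(A[i][k] + B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def approx_min_plus(A, B, eps):
    """Zwick-style scaling: for each scale q, solve a rounded instance whose
    entries are integers of size O(1/eps), and keep outputs near [q, 2q]."""
    n = len(A)
    C = [[INF] * n for _ in range(n)]
    W = max(x for M in (A, B) for row in M for x in row if x < INF)
    q = 1
    while q <= 2 * W:
        # keep entries <= 2q, round down to a multiple of eps*q, scale by 1/(eps*q)
        def prep(M, q=q):
            return [[math.floor(x / (eps * q)) if x <= 2 * q else INF
                     for x in row] for row in M]
        Cq = exact_min_plus(prep(A), prep(B))
        for i in range(n):
            for j in range(n):
                val = Cq[i][j] * eps * q  # undo the scaling
                # only trust outputs that belong to this scale (up to rounding)
                if val >= (1 - 2 * eps) * q:
                    C[i][j] = min(C[i][j], val)
        q *= 2
    return C
```

Each entry is rounded down by less than εq, so an output in [q, 2q] loses at most 2εq, i.e., a relative error of O(ε); the acceptance test discards scales that are too coarse for a given output.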
SLIDE 8 Approximate APSP
(1 + ε)-approximate APSP: Compute a (1 + ε)-approximation of all pairwise distances.
Algorithm: O(n^ω/ε · log W)
Is this optimal?
◮ Improving the dependence on n to n^{ω−0.01} would yield an improved algorithm for BMM.
◮ Improving the dependence on ε to 1/ε^{o(1)} would violate the APSP hypothesis.
◮ What about the log W factor? Can we get strongly polynomial time O(n^ω/ε)?
We care because log W factors can be large for floating-point numbers, and because of the theoretical interest in strongly polynomial algorithms.
SLIDE 11 Computational Model
Approximate APSP: O(n^ω/ε · log W) → O(n^ω/ε)?
Are we cheating? Input numbers already consist of log W bits...
Model for this talk: number of arithmetic operations on a RAM machine. [Zwick'02] uses O(n^ω/ε · log W) operations; the goal is O(n^ω/ε).
In the paper we also consider more restrictive models, namely bit complexity with log W-bit integers ([Zwick'02] runs in O(n^ω/ε · log^2 W) time) and bit complexity with floating-point approximation. In principle the log W improvement could be possible in all of them.
Input Format: Floating-Point Representation. We approximate each input number by a floating-point number (1 + x) · 2^y with an O(log(1/ε))-bit mantissa and an O(log log W)-bit exponent. Note that changing any input integer by a factor (1 + ε) still yields a valid (1 + O(ε))-approximation.
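The input format above can be checked with a few lines: truncating the mantissa to b bits perturbs a positive integer by at most a factor (1 − 2^{-b}), so b = O(log(1/ε)) bits suffice for a (1 + ε)-faithful encoding. A minimal sketch; the function name is mine.

```python
def to_float_format(x, mantissa_bits):
    """Round a positive integer x down to the form (m / 2^b) * 2^y with
    1 <= m / 2^b < 2, i.e. a float with a b-bit mantissa."""
    y = x.bit_length() - 1            # exponent: 2^y <= x < 2^(y+1)
    m = (x << mantissa_bits) >> y     # truncate x / 2^y to b fractional bits
    return m * 2.0 ** (y - mantissa_bits)

x = 1_000_003
approx = to_float_format(x, mantissa_bits=10)
assert 0 <= x - approx <= x / 2 ** 10   # relative error at most 2^-b
```

Storing y instead of x is what shrinks log W to log log W bits: the exponent of a number below W fits in O(log log W) bits.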
SLIDE 14 Main Algorithmic Results
Approximate APSP: O(n^ω/ε · log W) → O(n^ω/ε)?
Theorem [Bringmann, Künnemann, W'19]:
◮ Computing graph characteristics (e.g., Diameter, Radius, Median, ...) in O(n^ω/ε) time,
◮ Approximate APSP on undirected graphs in O(n^ω/ε) time,
◮ Approximate APSP on directed graphs in O(n^{(3+ω)/2}/ε) time; this is equivalent to exact MinMaxProduct.
Note that n^ω ≤ n^2.373 and n^{(3+ω)/2} ≤ n^2.687.
SLIDE 15 Focus of this talk
Approximate APSP: O(n^ω/ε · log W) → O(n^ω/ε)?
Theorem [Bringmann, Künnemann, W'19]:
Approximate APSP on directed graphs in O(n^{(3+ω)/2}/ε) time; this is equivalent to exact MinMaxProduct.
Why should you care?
◮ First strongly polynomial approximation for APSP (subcubic in n),
◮ Equivalence between an approximate and an exact problem,
◮ Conditional lower bound (under a nonstandard conjecture).
SLIDE 18 Basic Ingredients
MinPlusProduct: Given A, B ∈ R^{n×n}, their (min, +)-product is the matrix C ∈ R^{n×n} with C_{i,j} = min_{k∈[n]} (A_{i,k} + B_{k,j}).
Ingredient 1 [Zwick'02]: If approximate MinPlusProduct is computable in time O(n^c/ε), then approximate APSP is computable in time O(n^c/ε) (and vice versa).
MinMaxProduct: Given A, B ∈ R^{n×n}, their (min, max)-product is the matrix C ∈ R^{n×n} with C_{i,j} = min_{k∈[n]} max{A_{i,k}, B_{k,j}}.
Ingredient 2 [Duan, Pettie'09]: MinMaxProduct can be computed in O(n^{(3+ω)/2}) time.
MinMaxProduct is equivalent to All-Pairs Bottleneck Paths (APBP) [VVWY'07].
Goal: Prove equivalence between approximate MinPlusProduct and exact MinMaxProduct.
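For reference, the (min, max)-product from Ingredient 2 in its naive cubic form; the Duan–Pettie algorithm achieves the n^{(3+ω)/2} bound via fast matrix multiplication, which this sketch does not attempt.

```python
def min_max_product(A, B):
    """C[i][j] = min over k of max(A[i][k], B[k][j])  (bottleneck composition)."""
    n = len(A)
    return [[min(max(A[i][k], B[k][j]) for k in range(n)) for j in range(n)]
            for i in range(n)]

# Bottleneck view: the smallest possible "widest edge" over all two-hop routes.
A = [[2, 9], [4, 3]]
B = [[8, 1], [5, 6]]
print(min_max_product(A, B))  # [[8, 2], [5, 4]]
```

For example, entry (0, 1) is min(max(2, 1), max(9, 6)) = 2, which is the APBP connection: iterating this product computes all-pairs bottleneck paths.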
SLIDE 19 Overview of previous slide
Apx APSP ≡ Apx MinPlusProd ≡ MinMaxProd ≡ APBP
Next Goal: For any c ≥ 2, the following are equivalent:
◮ Approximate MinPlusProd in strongly polynomial O(n^c/poly(ε)) time,
◮ Exact MinMaxProd in O(n^c) time.
SLIDE 21 Goal 1: If MinMax in time O(n^c), then ApxMinPlus in O(n^c/ε)
Clearly, A_{i,k} + B_{k,j} ≈ max{A_{i,k}, B_{k,j}} up to a factor 2. In particular, MinPlusProd(A, B) ≈ MinMaxProd(A, B) up to a factor 2. Can we reduce the factor 2 to (1 + ε)?
Theorem (Sum-To-Max Covering) [Bringmann, Künnemann, W'19]:
For matrices A, B ∈ R_{≥0}^{n×n} there are matrices A^1, ..., A^s, B^1, ..., B^s with s = O(1/ε) such that
A_{i,k} + B_{k,j} ≈ min_{ℓ∈[s]} max{A^ℓ_{i,k}, B^ℓ_{k,j}}
up to a factor (1 + ε). Hence, MinPlusProd(A, B) ≈ min_{ℓ∈[s]} MinMaxProd(A^ℓ, B^ℓ) up to a factor (1 + ε). This can be computed in O(s · n^c) = O(n^{(3+ω)/2}/ε) time.
SLIDE 24 Goal 2: If ApxMinPlus in O(n^c) for a constant ε > 0, then MinMaxProd in O(n^c)
If we exponentiate the numbers, their sum gives a good approximation of the max: 20^a + 20^b ≈ 20^{max{a,b}} up to a factor 2. In particular, log_20(20^a + 20^b), rounded to the nearest integer, equals max{a, b}.
Reduction: For every entry a in A, B compute 20^a. Then execute approximate MinPlusProd for ε = 1/4 as a black box: MinPlusProd(20^A, 20^B) ≈ 20^{MinMaxProd(A,B)}. At the end, take logarithms and round to the closest integer to obtain MinMaxProd.
The numbers got exponentially large. Did we cheat?
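The base-20 trick can be checked numerically: 20^a + 20^b ≤ 2 · 20^{max{a,b}}, and log_20(2) ≈ 0.23 < 1/2, so rounding the logarithm recovers the max exactly. A minimal sketch; the function name is mine.

```python
import math

def max_via_sum(a, b):
    """Recover max{a, b} from the sum of base-20 exponentials, as in the reduction."""
    return round(math.log(20 ** a + 20 ** b, 20))

# Even a (1 + 1/4)-approximate sum only shifts the logarithm by at most
# log_20(2.5) ~ 0.31, which is still within rounding distance of the max.
assert all(max_via_sum(a, b) == max(a, b) for a in range(1, 25) for b in range(1, 25))
```

This is why a constant ε = 1/4 suffices in the black-box call: the approximation error disappears in the rounding step.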
SLIDE 25
Can we exponentiate for free?
Exact MinMaxProduct. Input format: integers in {1, ..., W}. Bits needed: log W.
Approximate MinPlusProduct. Input format: floating-point numbers (upper bounded by W) with an O(log(1/ε))-bit mantissa and an O(log log W)-bit exponent. Bits needed: O(log(1/ε) + log log W).
We did not cheat in this format!
SLIDE 26 Wrapping up
Theorem (Equivalence) [Bringmann, Künnemann, W'19]:
For any c ≥ 2: MinMaxProd in O(n^c) time ⟹ Approximate MinPlusProd in O(n^c/ε) time (by Sum-To-Max Covering), and Approximate MinPlusProd in O(n^c) time ⟹ MinMaxProd in O(n^c) time (by exponentiation).
See our poster for a proof of Sum-To-Max Covering!
SLIDE 27 Open Problems
◮ Faster MinMaxProduct? Or can we rule out a faster algorithm?
◮ Can we get a faster algorithm for MinMaxConv? Or perhaps rule one out?
◮ Strongly polynomial dynamic or distributed graph algorithms, e.g., reduce poly(log n, log W) to poly(log n, log log W).
◮ More applications of Sum-To-Max Covering?
◮ More research on strongly polynomial approximation.
SLIDE 28 Conclusion
Theorem [Bringmann, Künnemann, W'19]:
◮ Computing graph characteristics (e.g., Diameter, Radius, Median, ...) in O(n^ω/ε) time,
◮ Approximate APSP on undirected graphs in O(n^ω/ε) time,
◮ Approximate APSP on directed graphs in O(n^{(3+ω)/2}/ε) time; this is equivalent to exact MinMaxProduct.
Extra results [Bringmann, Künnemann, W'19]: an equivalence class between approximate (min, +)-convolution and exact (min, max)-convolution, TreeSparsity, ...
Thank you!