SLIDE 1

Approximating APSP without Scaling: Equivalence of Approximate Min-Plus and Exact Min-Max

Karl Bringmann, Marvin Künnemann, Karol Węgrzycki

Max Planck Institute Saarbrücken, University of Warsaw

STOC 2019

SLIDE 2

All-Pairs Shortest Path

Problem (APSP): Given a directed graph G with positive edge weights in {1, . . . , W}, compute the shortest-path distance between all pairs of vertices.

[figure: example directed graph with edge weights]

Algorithms: O(n^3) [Floyd'62, Warshall'62], n^3 / 2^{Ω(√log n)} [Williams'14].

APSP hypothesis: for all δ > 0, APSP has no O(n^{3−δ}) algorithm (even for W = poly(n)). [Vassilevska-Williams, Williams '10]
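The O(n^3) bound of Floyd-Warshall comes from a simple triple loop over intermediate vertices; a minimal Python sketch (the example graph below is made up for illustration, not the one from the slide):

```python
import math

def floyd_warshall(w):
    """All-pairs shortest paths in O(n^3) time.
    w[i][j] is the weight of edge i -> j, or math.inf if absent."""
    n = len(w)
    d = [[0 if i == j else w[i][j] for j in range(n)] for i in range(n)]
    for k in range(n):              # allow vertex k as an intermediate stop
        for i in range(n):
            for j in range(n):
                if d[i][k] + d[k][j] < d[i][j]:
                    d[i][j] = d[i][k] + d[k][j]
    return d

INF = math.inf
# Edges: 0 -> 1 (weight 1), 1 -> 2 (weight 2), 0 -> 2 (weight 5)
dist = floyd_warshall([[INF, 1, 5], [INF, INF, 2], [INF, INF, INF]])
# dist[0][2] == 3: the two-hop path 0 -> 1 -> 2 beats the direct edge
```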


SLIDE 4

Approximate APSP

(1 + ε)-approximate APSP: Compute a (1 + ε)-approximation of all pairwise distances.

Algorithm: O(n^ω/ε · log W) [Zwick'02]

This yields the same upper bound for approximate graph characteristics, such as: Diameter, Radius, Median, Minimum Weight Triangle, Minimum Weight Cycle, . . .

This is state-of-the-art for:
◮ directed graphs for any constant ε > 0,
◮ undirected graphs for any ε ∈ (0, 1).

O(n^ω) ≤ O(n^2.373) is the running time of fast matrix multiplication.

SLIDE 6

Zwick's O(n^ω/ε · log W) Algorithm, part I

MinPlusProduct: Given A, B ∈ R^{n×n}, their (min, +)-product is the matrix C ∈ R^{n×n} with C_{i,j} = min_{k∈[n]} A_{i,k} + B_{k,j}.

Algorithms: O(n^3), O(W·n^ω).

Theorem [Zwick'02]: If approximate MinPlusProduct runs in time O(n^c/ε), then approximate APSP runs in time O(n^c/ε). (c is some constant)

Sketch of the reduction: Given the adjacency matrix A:
  1. Add self-loops with cost 0,
  2. Square ⌈log n⌉ times using approximate MinPlusProduct,
  3. D := A^{2^⌈log n⌉},
  4. Then D_{i,j} approximates the distance between i and j.
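The four steps above can be sketched directly, with an exact cubic (min,+)-product standing in for the approximate one (a toy illustration of the reduction only, not Zwick's actual subcubic routine):

```python
import math

def min_plus_product(A, B):
    """Exact (min,+)-product: C[i][j] = min_k A[i][k] + B[k][j].
    In the real reduction this would be the *approximate* product."""
    n = len(A)
    return [[min(A[i][k] + B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def apsp_by_squaring(adj):
    """Steps 1-4: add 0-cost self-loops, then square ceil(log2 n) times."""
    n = len(adj)
    D = [[0 if i == j else adj[i][j] for j in range(n)] for i in range(n)]
    for _ in range(max(1, math.ceil(math.log2(n)))):
        D = min_plus_product(D, D)  # after t squarings, D covers paths of <= 2^t edges
    return D

INF = math.inf
D = apsp_by_squaring([[INF, 1, 5], [INF, INF, 2], [INF, INF, INF]])
# D[0][2] == 3, matching the distance found by Floyd-Warshall
```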
SLIDE 7

Zwick's O(n^ω/ε · log W) Algorithm, part II

MinPlusProduct: Given A, B ∈ R^{n×n}, their (min, +)-product is the matrix C ∈ R^{n×n} with C_{i,j} = min_{k∈[n]} A_{i,k} + B_{k,j}.

Algorithms: O(n^3), O(W·n^ω).

Theorem ((1 + ε)-approximate MinPlusProduct in O(n^ω/ε · log W) time) [Zwick'02]:

For q := 1, 2, 4, . . . , W, search for all outputs C_{i,j} ∈ [q, 2q]:
◮ Remove all entries > 2q from A, B (by setting them to ∞),
◮ Round all entries of A, B to multiples of εq,
◮ Scale all entries by 1/(εq) (hence the new weight bound is W = O(1/ε)),
◮ Run the exact O(W·n^ω) algorithm for MinPlusProduct.

SLIDE 8

Approximate APSP

(1 + ε)-approximate APSP: Compute a (1 + ε)-approximation of all pairwise distances.

Algorithm: O(n^ω/ε · log W) [Zwick'02]

Is this optimal?
◮ Improving the dependence on n to n^{ω−0.01} would yield an improved algorithm for BMM.
◮ Improving the dependence on ε to 1/ε^{o(1)} would violate the APSP hypothesis.
◮ What about the log W factor? Can we get strongly polynomial time O(n^ω/ε)?

We care because log W factors can be large for floating-point numbers, and because of the theoretical interest in strongly polynomial algorithms.

SLIDE 11

Computational Model

Approximate APSP: O(n^ω/ε · log W) [Zwick'02]. Can we get O(n^ω/ε) time?

Are we cheating? The input numbers already consist of log W bits. . .

Model for this talk: number of arithmetic operations on a RAM machine. [Zwick'02] uses O(n^ω/ε · log W) arithmetic operations; O(n^ω/ε) could be possible.

In the paper we also consider more restrictive models: bit complexity with log W-bit integers ([Zwick'02] runs in O(n^ω/ε · log² W) time), and bit complexity with floating-point approximation. In principle, the log W improvement could be possible in all of them.

Input format: floating-point representation. We approximate each input number by a floating-point number (1 + x) · 2^y with an O(log 1/ε)-bit mantissa and an O(log log W)-bit exponent. Note that changing any input integer by a factor (1 + ε) still yields a (1 + O(ε))-approximation.


SLIDE 14

Main Algorithmic Results

Approximate APSP: O(n^ω/ε · log W) [Zwick'02]. Can we get O(n^ω/ε) time?

Theorem [Bringmann, Künnemann, Węgrzycki '19]:
◮ Computing graph characteristics (e.g., Diameter, Radius, Median, . . . ) in O(n^ω/ε) time.
◮ Approximate APSP on undirected graphs in O(n^ω/ε) time.
◮ Approximate APSP on directed graphs in O(n^{(3+ω)/2}/ε) time; this is equivalent to exact MinMaxProduct.

Note that n^ω ≤ n^2.373 and n^{(3+ω)/2} ≤ n^2.687.

SLIDE 15

Focus of this talk

Approximate APSP: O(n^ω/ε · log W) [Zwick'02]. Can we get O(n^ω/ε) time?

Theorem [Bringmann, Künnemann, Węgrzycki '19]: Approximate APSP on directed graphs in O(n^{(3+ω)/2}/ε) time; this is equivalent to exact MinMaxProduct.

Why should you care?
◮ The first strongly polynomial approximation for APSP that is subcubic in n,
◮ An equivalence between an approximate and an exact problem,
◮ A conditional lower bound (under a nonstandard conjecture).


SLIDE 18

Basic Ingredients

MinPlusProduct: Given A, B ∈ R^{n×n}, their (min, +)-product is the matrix C ∈ R^{n×n} with C_{i,j} = min_{k∈[n]} A_{i,k} + B_{k,j}.

Ingredient 1 [Zwick'02]: If approximate MinPlusProduct runs in time O(n^c/ε), then approximate APSP runs in time O(n^c/ε) (and vice versa).

MinMaxProduct: Given A, B ∈ R^{n×n}, their (min, max)-product is the matrix C ∈ R^{n×n} with C_{i,j} = min_{k∈[n]} max{A_{i,k}, B_{k,j}}.

Ingredient 2 [Duan, Pettie'09]: MinMaxProduct can be computed in O(n^{(3+ω)/2}) time. MinMaxProduct is equivalent to All-Pairs Bottleneck Paths (APBP) [VVWY'07].

Goal: Prove the equivalence between approximate MinPlusProduct and exact MinMaxProduct.
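For reference, the (min,max)-product computed directly from its definition (the O(n^{(3+ω)/2}) algorithm of [Duan, Pettie'09] is far more involved; this cubic version just writes the definition as code):

```python
def min_max_product(A, B):
    """Direct O(n^3) (min,max)-product: C[i][j] = min_k max(A[i][k], B[k][j]).
    Only a reference implementation of the definition, not the subcubic
    algorithm of [Duan, Pettie'09]."""
    n = len(A)
    return [[min(max(A[i][k], B[k][j]) for k in range(n)) for j in range(n)]
            for i in range(n)]

A = [[1, 4], [7, 2]]
B = [[3, 9], [5, 1]]
C = min_max_product(A, B)
# C[0][0] = min(max(1, 3), max(4, 5)) = 3
```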

SLIDE 19

Overview of the previous slide

Apx APSP ≡ Apx MinPlusProd ≡ MinMaxProd ≡ APBP

Next Goal: For any c ≥ 2, the following are equivalent:
◮ Approximate MinPlusProd in strongly polynomial O(n^c · poly(1/ε)) time.
◮ Exact MinMaxProd in O(n^c) time.

SLIDE 21

Goal 1: If MinMax runs in time O(n^c), then ApxMinPlus runs in O(n^c/ε).

Clearly, A_{i,k} + B_{k,j} ≈ max{A_{i,k}, B_{k,j}} up to a factor 2. In particular, MinPlusProd(A, B) ≈ MinMaxProd(A, B) up to a factor 2. Can we reduce the factor 2 to (1 + ε)?

Theorem (Sum-To-Max Covering) [Bringmann, Künnemann, Węgrzycki '19]: For matrices A, B ∈ R^{n×n}_{≥0} there are matrices A^1, . . . , A^s, B^1, . . . , B^s with s = O(1/ε) such that

A_{i,k} + B_{k,j} ≈ min_{ℓ∈[s]} max{A^ℓ_{i,k}, B^ℓ_{k,j}}

up to a factor (1 + ε). Hence MinPlusProd(A, B) ≈ min_{ℓ∈[s]} MinMaxProd(A^ℓ, B^ℓ) up to a factor (1 + ε), which can be computed in O(s · n^c) = O(n^{(3+ω)/2}/ε) time.
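The factor-2 claim at the top of the slide is easy to check numerically: for nonnegative entries, max{a, b} ≤ a + b ≤ 2·max{a, b}, so the two products sandwich each other entrywise (the random test matrices below are made up for illustration):

```python
import random

def min_plus(A, B):
    n = len(A)
    return [[min(A[i][k] + B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def min_max(A, B):
    n = len(A)
    return [[min(max(A[i][k], B[k][j]) for k in range(n)) for j in range(n)]
            for i in range(n)]

random.seed(0)
n = 8
A = [[random.randint(1, 100) for _ in range(n)] for _ in range(n)]
B = [[random.randint(1, 100) for _ in range(n)] for _ in range(n)]
MP, MM = min_plus(A, B), min_max(A, B)
# Entrywise sandwich: MinMaxProd <= MinPlusProd <= 2 * MinMaxProd
ok = all(MM[i][j] <= MP[i][j] <= 2 * MM[i][j] for i in range(n) for j in range(n))
```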
SLIDE 24

Goal 2: If ApxMinPlus runs in O(n^c) time for some constant ε > 0, then MinMaxProd runs in O(n^c) time.

If we exponentiate the numbers, their sum gives a good approximation of their max: 20^a + 20^b ≈ 20^{max{a,b}} up to a factor 2, i.e., ⌊log_20(20^a + 20^b)⌋ = max{a, b}.

Reduction: Replace every entry a of A, B by 20^a. Then execute approximate MinPlusProd with ε = 1/4 as a blackbox: MinPlusProd(20^A, 20^B) ≈ 20^{MinMaxProd(A,B)}. At the end, take the logarithm and round to the closest integer to obtain MinMaxProd. But the numbers got exponentially large. Did we cheat?
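The reduction above can be sketched as follows; for simplicity an exact (min,+)-product plays the role of the (1 + 1/4)-approximate blackbox (since 2 · (1 + 1/4) is still below √20, rounding log_20 recovers the exact max even with that slack):

```python
import math

def min_plus(A, B):
    """Exact (min,+)-product, standing in for the (1 + 1/4)-approximate blackbox."""
    n = len(A)
    return [[min(A[i][k] + B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def min_max_via_min_plus(A, B):
    """MinMaxProd from (approximate) MinPlusProd: exponentiate entries
    base 20, take the (min,+)-product, then take log_20 and round.
    20^a + 20^b is within a factor 2 of 20^max(a,b), so the rounded
    logarithm is exactly the max."""
    n = len(A)
    EA = [[20 ** A[i][k] for k in range(n)] for i in range(n)]
    EB = [[20 ** B[k][j] for j in range(n)] for k in range(n)]
    C = min_plus(EA, EB)
    return [[round(math.log(C[i][j], 20)) for j in range(n)] for i in range(n)]

A = [[1, 4], [7, 2]]
B = [[3, 9], [5, 1]]
C = min_max_via_min_plus(A, B)
# C equals the direct (min,max)-product min_k max(A[i][k], B[k][j])
```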

SLIDE 25

Can we exponentiate for free?

Exact MinMaxProduct. Input format: integers in {1, . . . , W}. Bits needed: log W.

Approximate MinPlusProduct. Input format: floating-point numbers (bounded by W) with an O(log 1/ε)-bit mantissa and an O(log log W)-bit exponent. Bits needed: O(log 1/ε + log log W).

We did not cheat in this format: exponentiating an integer a ∈ {1, . . . , W} yields 20^a, whose floating-point exponent fits in O(log W) bits, no more than the original integer itself.
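A sketch of the rounding behind this input format: truncating a number to an O(log 1/ε)-bit mantissa changes it by at most a (1 + ε) factor (the function below is an illustration of the format, not code from the paper):

```python
import math

def round_mantissa(x, bits):
    """Round x >= 1 to the form (1 + m / 2^bits) * 2^y, i.e. keep a
    `bits`-bit mantissa; the relative error is at most 2^-bits."""
    y = math.floor(math.log2(x))      # exponent: only O(log log W) bits for x <= W
    m = round((x / 2 ** y - 1) * 2 ** bits)
    return (1 + m / 2 ** bits) * 2 ** y

eps = 0.01
bits = math.ceil(math.log2(1 / eps))  # an O(log 1/eps)-bit mantissa suffices
x = 123456789
assert abs(round_mantissa(x, bits) - x) <= eps * x
```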

SLIDE 26

Wrapping up

Theorem (Equivalence) [Bringmann, Künnemann, Węgrzycki '19]: For any c ≥ 2:
◮ Approximate MinPlusProd in O(n^c) for a fixed ε > 0 ⇒ MinMaxProd in O(n^c) (by exponentiation),
◮ MinMaxProd in O(n^c) ⇒ Approximate MinPlusProd in O(n^c/ε) (by Sum-To-Max Covering).

See our poster for a proof of Sum-To-Max Covering!

SLIDE 27

Open Problems

◮ Faster MinMaxProduct? Can we rule out O(n^{2.5−δ})?
◮ Can we get a faster algorithm for MinMaxConv? Or perhaps rule out O(n^{1.5−δ})?
◮ Strongly polynomial dynamic or distributed graph algorithms, e.g., reduce poly(log n, log W) to poly(log n, log log W).
◮ More applications of Sum-To-Max Covering?
◮ More research on strongly polynomial approximation.

SLIDE 28

Conclusion

Theorem [Bringmann, Künnemann, Węgrzycki '19]:
◮ Computing graph characteristics (e.g., Diameter, Radius, Median, . . . ) in O(n^ω/ε) time.
◮ Approximate APSP on undirected graphs in O(n^ω/ε) time.
◮ Approximate APSP on directed graphs in O(n^{(3+ω)/2}/ε) time; this is equivalent to exact MinMaxProduct.

Extra results [Bringmann, Künnemann, Węgrzycki '19]: an equivalence class between approximate (min, +)-convolution and exact (min, max)-convolution, Tree Sparsity, . . .

Thank you!