Lecture 11. Matrix Algorithms and Applications (PowerPoint PPT Presentation)
Introduction to Computer Science, IIIS, Tsinghua


slide-1
SLIDE 1

Lecture 11. Matrix Algorithms and Applications

Introduction to Computer Science, IIIS, Tsinghua

slide-2
SLIDE 2

Outline

} Introduction to divide and conquer

} faster matrix multiplication algorithm

} Algorithms for matrix operations

} LUP decomposition, matrix inversion, …

} Applications

} Perfect matching by matrix multiplication
} 4-node subgraphs
} dominance product / (max,min)-product

slide-3
SLIDE 3

Divide and Conquer

} Break up a problem into two sub-problems
} Solve each sub-problem recursively
} Combine the solutions to the sub-problems to form a solution to the original problem.
slide-4
SLIDE 4

Integer Multiplication

} Addition: Given two n-digit integers a and b, compute a + b.

} O(n) bit operations.

} Multiplication. Given two n-digit integers a and b, compute a × b.

} Brute force solution: Θ(n^2) bit operations.

(figure: grade-school long multiplication of two n-digit binary numbers)

slide-5
SLIDE 5

Divide-and-Conquer Multiplication

} We can divide the n-bit numbers a and b into two parts:

} where a = a1·N + a2, b = b1·N + b2 (N = 2^{n/2})
} so a×b = (a1N+a2)(b1N+b2) = a1b1N^2 + (a1b2+a2b1)N + a2b2.

} Suppose we can compute the multiplication of two (n/2)-bit numbers,
} We need to use it 4 times.
} Compute a1b1, a1b2, a2b1, a2b2

(figure: a split into halves a1, a2; b split into halves b1, b2)

slide-6
SLIDE 6

Divide-and-Conquer Multiplication

} We can divide the n-bit numbers a and b into two parts:

} where a = a1·N + a2, b = b1·N + b2 (N = 2^{n/2})
} so a×b = (a1N+a2)(b1N+b2) = a1b1N^2 + (a1b2+a2b1)N + a2b2.

} Suppose we can compute the multiplication of two (n/2)-bit numbers,
} We need to use it 4 times.
} Compute a1b1, a1b2, a2b1, a2b2

(figure: a split into halves a1, a2; b split into halves b1, b2)

T(n) = 4T(n/2) + Θ(n)   (4 recursive calls; additions and shifts)
⇒ T(n) = Θ(n^2)

How can we improve?

slide-7
SLIDE 7

Divide-and-Conquer Multiplication

} We can divide the n-bit numbers a and b into two parts:

} in fact, a1b1 + a1b2 + a2b1 + a2b2 = (a1+a2)(b1+b2).
} So if we have already computed a1b1 and a2b2,
} a1b2 + a2b1 = (a1+a2)(b1+b2) − a1b1 − a2b2

(figure: a split into halves a1, a2; b split into halves b1, b2)

slide-8
SLIDE 8

Divide-and-Conquer Multiplication

} We can divide the n-bit numbers a and b into two parts:

(figure: a split into halves a1, a2; b split into halves b1, b2)

Compute A = a1b1, B = a2b2, C = (a1+a2)(b1+b2)
a·b = (a1N+a2)(b1N+b2) = a1b1N^2 + (a1b2+a2b1)N + a2b2 = A·N^2 + (C−A−B)·N + B
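The three-products scheme above can be sketched directly in code. This is a minimal sketch (the function name and base-case threshold are my choices, not from the slides); it splits each number at half its bit length and recombines with a·b = A·N^2 + (C−A−B)·N + B.

```python
def karatsuba(a, b):
    """Multiply nonnegative integers a, b with 3 recursive multiplications,
    using a*b = A*N^2 + (C - A - B)*N + B where N = 2^(n/2)."""
    if a < 16 or b < 16:                  # base case: small numbers
        return a * b
    n = max(a.bit_length(), b.bit_length())
    half = n // 2
    N = 1 << half
    a1, a2 = a >> half, a & (N - 1)       # a = a1*N + a2
    b1, b2 = b >> half, b & (N - 1)       # b = b1*N + b2
    A = karatsuba(a1, b1)
    B = karatsuba(a2, b2)
    C = karatsuba(a1 + a2, b1 + b2)       # (a1+a2)(b1+b2)
    return A * (N * N) + (C - A - B) * N + B
```

The three recursive calls instead of four are exactly what improves the recurrence on the next slides.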

slide-9
SLIDE 9

Divide-and-Conquer Multiplication

} Theorem [Karatsuba-Ofman, 1962]. Two n-digit integers can be multiplied in O(n^1.585) bit operations.

A = a1b1, B = a2b2, C = (a1+a2)(b1+b2)
a·b = (a1N+a2)(b1N+b2) = a1b1N^2 + (a1b2+a2b1)N + a2b2 = A·N^2 + (C−A−B)·N + B

T(n) ≤ T(⌊n/2⌋) + T(⌈n/2⌉) + T(1 + ⌈n/2⌉) + Θ(n)   (recursive calls; additions, subtractions, shifts)
⇒ T(n) = O(n^{log2 3}) = O(n^1.585)

slide-10
SLIDE 10

Karatsuba: Recursion Tree

T(n) = 1 if n = 1, and 3T(n/2) + n otherwise

(figure: recursion tree; level sums n, 3(n/2), 9(n/4), …, 3^k(n/2^k), …, with 3^{lg n} leaves)

T(n) = n · Σ_{k=0}^{log2 n} (3/2)^k = n · ((3/2)^{1+log2 n} − 1) / (3/2 − 1) = 3n^{log2 3} − 2n

slide-11
SLIDE 11

Integer Multiplication

} [Karatsuba-Ofman, 1962] O(n^1.585)
} [Schönhage, Strassen 1971] O(n·log n·loglog n)
} [Fürer 2007] O(n·log n·2^{O(log* n)})
} log* n is the number of times log must be iterated on n before the value drops to ≤ 1

} Almost in linear time!

slide-12
SLIDE 12

Algebraic Matrix Multiplication

A × B = C, where A = (aij), B = (bij), C = (cij) and cij = Σ_k aik·bkj.

Can be computed naively in O(n^3) time.

slide-13
SLIDE 13

Strassen’s n×n algorithm

View each n×n matrix as a 2×2 matrix whose elements are n/2 × n/2 matrices. Apply the 2×2 algorithm recursively.

(figure: block matrices [C11 C12; C21 C22] = [A11 A12; A21 A22] × [B11 B12; B21 B22])

slide-14
SLIDE 14

Multiplying 2×2 matrices

8 multiplications, 4 additions

T(n) = 8T(n/2) + O(n^2) ⇒ T(n) = O(n^{log 8 / log 2}) = O(n^3)

C11 = A11B11 + A12B21
C12 = A11B12 + A12B22
C21 = A21B11 + A22B21
C22 = A21B12 + A22B22

slide-15
SLIDE 15

Strassen’s 2×2 algorithm

C11 = A11B11 + A12B21, C12 = A11B12 + A12B22
C21 = A21B11 + A22B21, C22 = A21B12 + A22B22

M1 = (A11 + A22)(B11 + B22)
M2 = (A21 + A22)B11
M3 = A11(B12 − B22)
M4 = A22(B21 − B11)
M5 = (A11 + A12)B22
M6 = (A21 − A11)(B11 + B12)
M7 = (A12 − A22)(B21 + B22)

C11 = M1 + M4 − M5 + M7
C12 = M3 + M5
C21 = M2 + M4
C22 = M1 − M2 + M3 + M6

7 multiplications 18 additions/subtractions
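The seven products can be applied recursively on half-size blocks. A minimal sketch with NumPy, assuming n is a power of two (the function name is mine):

```python
import numpy as np

def strassen(A, B):
    """Strassen's recursion on n x n matrices (n a power of two),
    using the seven products M1..M7 and four output combinations."""
    n = A.shape[0]
    if n == 1:
        return A * B                       # 1x1 base case
    h = n // 2
    A11, A12, A21, A22 = A[:h, :h], A[:h, h:], A[h:, :h], A[h:, h:]
    B11, B12, B21, B22 = B[:h, :h], B[:h, h:], B[h:, :h], B[h:, h:]
    M1 = strassen(A11 + A22, B11 + B22)
    M2 = strassen(A21 + A22, B11)
    M3 = strassen(A11, B12 - B22)
    M4 = strassen(A22, B21 - B11)
    M5 = strassen(A11 + A12, B22)
    M6 = strassen(A21 - A11, B11 + B12)
    M7 = strassen(A12 - A22, B21 + B22)
    C11 = M1 + M4 - M5 + M7
    C12 = M3 + M5
    C21 = M2 + M4
    C22 = M1 - M2 + M3 + M6
    return np.block([[C11, C12], [C21, C22]])
```

Each level does 7 recursive multiplications and Θ(n^2) additions, which is the recurrence on the next slide.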

slide-16
SLIDE 16

C11 = A11B11 + A12B21, C12 = A11B12 + A12B22
C21 = A21B11 + A22B21, C22 = A21B12 + A22B22

M1 = (A11 + A22)(B11 + B22)
M2 = (A21 + A22)B11
M3 = A11(B12 − B22)
M4 = A22(B21 − B11)
M5 = (A11 + A12)B22
M6 = (A21 − A11)(B11 + B12)
M7 = (A12 − A22)(B21 + B22)

C11 = M1 + M4 − M5 + M7
C12 = M3 + M5
C21 = M2 + M4
C22 = M1 − M2 + M3 + M6

7 multiplications 18 additions/subtractions

T(n) = 7T(n/2) + Θ(n^2)   (7 recursive calls; additions, subtractions)
⇒ T(n) = Θ(n^{log2 7}) = O(n^2.81)

slide-17
SLIDE 17

Matrix multiplication algorithms

The O(n^2.81) bound of Strassen was improved by Coppersmith and Winograd to O(n^2.376). The current bound is O(n^2.3729). (The algorithms are much more complicated…) We let 2 ≤ ω < 2.376 be the exponent. Many believe that ω = 2 + o(1).

slide-18
SLIDE 18

We let 2 ≤ ω < 2.373 be the exponent of the time complexity. Many believe that ω = 2 + o(1).

So we can denote the time for matrix multiplication by O(nω)

slide-19
SLIDE 19

Other matrix operations?

} Matrix inversion
} Determinant (行列式)
} Solving a linear system (Ax=b)
} Computing eigenvalues/eigenvectors

} (using the fast matrix multiplication algorithm)

slide-20
SLIDE 20

Other matrix operations?

} Matrix inversion
} Determinant (行列式)
} Solving a linear system (Ax=b)
} Computing eigenvalues/eigenvectors

} (using the fast matrix multiplication algorithm)

invertible / nonsingular matrix (可逆矩阵); not invertible / singular matrix (不可逆矩阵)

slide-21
SLIDE 21

LUP decomposition

} For an n×n matrix A, find L, U, P such that PA = LU (A = PLU)
} L is a unit lower-triangular matrix
} U is an upper-triangular matrix
} P is a permutation matrix (what's the inverse of P?)

Permutation matrix: every row and every column has exactly one entry equal to 1.
(example: a 7×7 permutation matrix, with a single 1 in each row and column)

slide-22
SLIDE 22

LU decomposition

} For an n×n matrix A, find L, U such that A = LU
} L is a unit lower-triangular matrix
} U is an upper-triangular matrix

} Just the Gaussian elimination:

[ a  wᵀ ]   [ 1    0 ]   [ a  wᵀ ]
[ v  A' ] = [ v/a  I ] × [ 0  ?  ]

slide-23
SLIDE 23

LU decomposition

} For an n×n matrix A, find L, U such that A = LU
} L is a unit lower-triangular matrix
} U is an upper-triangular matrix

} Just the Gaussian elimination:

[ a  wᵀ ]   [ 1    0 ]   [ a  wᵀ        ]
[ v  A' ] = [ v/a  I ] × [ 0  A'−vwᵀ/a ]

Recursively perform on A’-vwT/a
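The recursive elimination step can be sketched as follows, assuming no pivoting is needed (every leading pivot a is nonzero); the function name is mine, not from the slides:

```python
import numpy as np

def lu_no_pivot(A):
    """Recursive LU (no pivoting): peel off the first row and column,
    then recurse on the Schur complement A' - v w^T / a."""
    A = A.astype(float)
    n = A.shape[0]
    L, U = np.eye(n), np.zeros((n, n))
    if n == 1:
        U[0, 0] = A[0, 0]
        return L, U
    a, w, v, Ap = A[0, 0], A[0, 1:], A[1:, 0], A[1:, 1:]
    U[0, 0], U[0, 1:] = a, w          # first row of U
    L[1:, 0] = v / a                  # first column of L
    L2, U2 = lu_no_pivot(Ap - np.outer(v, w) / a)  # Schur complement
    L[1:, 1:], U[1:, 1:] = L2, U2
    return L, U
```

On a matrix with a zero pivot this sketch fails, which is exactly the problem the next slides address with permutations.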

slide-24
SLIDE 24

LU decomposition

} For an n×n matrix A, find L, U such that A = LU
} L is a unit lower-triangular matrix
} U is an upper-triangular matrix

} Just the Gaussian elimination:

[ a  wᵀ ]   [ 1    0 ]   [ a  wᵀ        ]
[ v  A' ] = [ v/a  I ] × [ 0  A'−vwᵀ/a ]

Recursively perform on A’-vwT/a What’s the problem with an arbitrary matrix?

slide-25
SLIDE 25

LU decomposition

} For an n×n matrix A, find L, U such that A = LU
} L is a unit lower-triangular matrix
} U is an upper-triangular matrix

} Just the Gaussian elimination
} What if the first element a11 = 0?
} Change the order of the rows by multiplying by a permutation matrix Q

[ a  wᵀ ]   [ 1    0 ]   [ a  wᵀ        ]
[ v  A' ] = [ v/a  I ] × [ 0  A'−vwᵀ/a ]

slide-26
SLIDE 26

LUP decomposition– Gaussian Elimination

} Elementary matrix:
} Gaussian elimination:
} a sequence of elementary matrices E1, E2, …, Ek s.t. A = E1…Ek·U
} How to deal with the permutation matrix Ti,j?

Left multiply: change rows
Right multiply: change columns

slide-27
SLIDE 27

LUP decomposition

[ a  wᵀ ]   [ 1    0 ]   [ a  wᵀ        ]
[ v  A' ] = [ v/a  I ] × [ 0  A'−vwᵀ/a ]

If we swap the 2nd row and the i-th row, call the result B

slide-28
SLIDE 28

LUP decomposition

[ a  wᵀ ]   [ 1    0 ]   [ a  wᵀ        ]
[ v  A' ] = [ v/a  I ] × [ 0  A'−vwᵀ/a ]

If we swap the 2nd row and the i-th row, call the result B

Right multiply: change columns

slide-29
SLIDE 29

LUP decomposition

[ a  wᵀ ]   [ 1    0 ]   [ a  wᵀ        ]
[ v  A' ] = [ v/a  I ] × [ 0  A'−vwᵀ/a ]

If we swap the 2nd row and the i-th row, call the result B

Right multiply: change columns

slide-30
SLIDE 30

LUP decomposition

[ a  wᵀ ]   [ 1    0 ]   [ a  wᵀ        ]
[ v  A' ] = [ v/a  I ] × [ 0  A'−vwᵀ/a ]

If we swap the 2nd row and the i-th row, call the result B

Left multiply: change rows

slide-31
SLIDE 31

LUP decomposition

[ a  wᵀ ]   [ 1    0 ]   [ a  wᵀ        ]
[ v  A' ] = [ v/a  I ] × [ 0  A'−vwᵀ/a ]

If we swap the 2nd row and the i-th row, call the result B

Still a lower-triangular matrix!

slide-32
SLIDE 32

LUP decomposition-summary

} LU-decomposition A = LU
} When the element aii = 0:
} Change the order of the rows by multiplying by a permutation matrix Q
} Finally A = E1…Ek·U for a sequence of elementary matrices
} So finally we can write A = PLU
} L is a unit lower-triangular matrix
} U is an upper-triangular matrix
} P is a permutation matrix

slide-33
SLIDE 33

LUP-decomposition

} After the LUP-decomposition A = PLU, every operation becomes easy:
} find the inverse
} compute the determinant: det(A) = det(P)·det(L)·det(U)
} Solve the linear system Ax = b:

(PLU)x = b ⇔ {Py = b; Lz = y; Ux = z}, takes O(n^2) time.
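The three O(n^2) passes can be sketched directly, assuming the factors P, L, U are given (the function name is mine):

```python
import numpy as np

def solve_plu(P, L, U, b):
    """Solve A x = b given A = P L U:
    P y = b, then L z = y (forward), then U x = z (backward)."""
    y = P.T @ b                       # a permutation's inverse is its transpose
    n = len(b)
    z = np.zeros(n)
    for i in range(n):                # forward substitution (L unit lower-triangular)
        z[i] = y[i] - L[i, :i] @ z[:i]
    x = np.zeros(n)
    for i in reversed(range(n)):      # back substitution
        x[i] = (z[i] - U[i, i+1:] @ x[i+1:]) / U[i, i]
    return x
```

Each of the two substitution loops touches every entry of its triangular factor once, giving the O(n^2) total claimed on the slide.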

slide-34
SLIDE 34

LUP-decomposition

} After the LUP-decomposition, every operation becomes easy:
} find the inverse
} compute the determinant
} Solve the linear system Ax = b
} How can we find the inverse of a triangular matrix?
} better than O(n^3)?

slide-35
SLIDE 35

LUP-decomposition

} After the LUP-decomposition, every operation becomes easy:
} find the inverse
} compute the determinant
} Solve the linear system Ax = b
} How can we find the inverse of a triangular matrix?
} So we can recursively compute A^{-1} and B^{-1} (also triangular)
} Time: T(n) = 2T(n/2) + Θ(n^ω) = Θ(n^ω) (the same as matrix multiplication)

[ A  C ]^{-1}   [ A^{-1}  −A^{-1}·C·B^{-1} ]
[ 0  B ]      = [ 0        B^{-1}          ]
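The block formula turns triangular inversion into two half-size inversions plus multiplications. A minimal sketch for upper-triangular matrices (function name mine):

```python
import numpy as np

def tri_inverse(T):
    """Invert an upper-triangular matrix via
    [[A, C], [0, B]]^-1 = [[A^-1, -A^-1 C B^-1], [0, B^-1]]."""
    n = T.shape[0]
    if n == 1:
        return np.array([[1.0 / T[0, 0]]])
    h = n // 2
    A, C, B = T[:h, :h], T[:h, h:], T[h:, h:]
    Ai, Bi = tri_inverse(A), tri_inverse(B)      # two recursive inversions
    top = np.hstack([Ai, -Ai @ C @ Bi])
    bot = np.hstack([np.zeros((n - h, h)), Bi])
    return np.vstack([top, bot])
```

The recursion does 2T(n/2) work plus a constant number of multiplications per level, matching the slide's Θ(n^ω) bound when those multiplications use fast matrix multiplication.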

slide-36
SLIDE 36

} What’s the running time for Gaussian elimination?

slide-37
SLIDE 37

} What’s the running time for Gaussian elimination? } O(n3) } How can we do the LUP-decomposition faster?

a wT v A’ 1 v / a

1 1 … 1

a wT A’-vwT/a

= ×

slide-38
SLIDE 38

How to accelerate LUP decomposition?

} Gaussian elimination:

} Eliminate elements one by one, in order:
} O(n^3) time.

slide-39
SLIDE 39

How to accelerate LUP decomposition?

} The Hopcroft-Bunch (1974) version:

} Eliminate elements in the following order:

slide-40
SLIDE 40

Hopcroft-Bunch algorithm

(figure: matrix split into blocks [U X; V Y])

slide-41
SLIDE 41

Hopcroft-Bunch algorithm

(figure: after elimination, blocks [U X; 0 Y − VU⁻¹X])
slide-42
SLIDE 42

Hopcroft-Bunch algorithm

} U becomes upper-triangular!
} Find U⁻¹ in O(n^ω) time

(figure: matrix split into blocks U, X, V, Y with U upper-triangular)

slide-43
SLIDE 43

Hopcroft-Bunch algorithm

(figure: after elimination, blocks [U X; 0 Y − VU⁻¹X])

} Block elimination!
slide-44
SLIDE 44

Hopcroft-Bunch algorithm

(figure: matrix split into blocks U, X, V, Y)

slide-45
SLIDE 45

Hopcroft-Bunch algorithm

(figure: after elimination, blocks [U X; 0 Y − VU⁻¹X])

slide-46
SLIDE 46

Hopcroft-Bunch algorithm

} U is always upper-triangular, so it's easy to find its inverse
} We just need to pick a non-zero element in the k-th row

(figure: matrix split into blocks U, X, V, Y)
slide-47
SLIDE 47

Hopcroft-Bunch algorithm

(figure: after elimination, blocks [U X; 0 Y − VU⁻¹X])

Recursively run this on Y − VU⁻¹X
slide-48
SLIDE 48

Hopcroft-Bunch algorithm

Every time we just need to multiply by a lower-triangular matrix:

(figure: the elimination matrix [I 0; −VU⁻¹ I], lower-triangular with unit diagonal)

slide-49
SLIDE 49

Running Time:

} M(n,p): time for multiplying an n×n and an n×p matrix } I(n): time for inverse an n×n matrix } Time:

} Inversions:

Σ_{i=1}^{log n} 2^i · I(n/2^i) = 2·I(n/2) + 4·I(n/4) + … + n·I(1)

slide-50
SLIDE 50

Running Time:

} M(n,p): time for multiplying an n×n and an n×p matrix } I(n): time for inverse an n×n matrix } Time:

} Inversions: Σ_{i=1}^{log n} 2^i · I(n/2^i) = 2·I(n/2) + 4·I(n/4) + … + n·I(1)
} Multiplications: Σ_{i=1}^{log n} 2^i · M(n/2^i, n)
(every time we need to compute Y − VU⁻¹X)

slide-51
SLIDE 51

Running Time:

} M(n,p): time for multiplying an n×n and an n×p matrix } I(n): time for inverse an n×n matrix } Time:

} Inversions: Σ_{i=1}^{log n} 2^i · I(n/2^i) = 2·I(n/2) + 4·I(n/4) + … + n·I(1)
} Multiplications: Σ_{i=1}^{log n} 2^i · M(n/2^i, n)
(every time we need to compute Y − VU⁻¹X)

What’s M(n,p) ?

slide-52
SLIDE 52

Running Time:

} M(n,p): O(p·n^{ω−1})
} a trivial bound; better methods are known
} I(n): O(n^ω)
} Time:
} Inversions: Σ_{i=1}^{log n} 2^i · I(n/2^i) = 2·I(n/2) + 4·I(n/4) + … + n·I(1)
} Multiplications: Σ_{i=1}^{log n} 2^i · M(n/2^i, n)
(every time we need to compute Y − VU⁻¹X)

Thus, the total time is still O(n^ω).

slide-53
SLIDE 53

Summary

} A reduces to B: we can use an algorithm for B to solve A (A → B)
} We have shown:
} so LUP-decomposition and matrix inversion have O(n^ω)-time algorithms.

(figure: reductions among Matrix Multiplication, LUP-decomposition, Matrix inversion)

slide-54
SLIDE 54

Summary

} A reduces to B: we can use an algorithm for B to solve A (A → B)
} We have shown:
} if we can show this reduction, then the three operations have the same time complexity!
} given an oracle for matrix inversion, can we compute A×B?

(figure: reductions among Matrix Multiplication, LUP-decomposition, Matrix inversion)

slide-55
SLIDE 55

Multiplication reduces to Inversion

} Very tricky result:
} To compute AB:

[ I  A  0 ]^{-1}   [ I  −A  AB ]
[ 0  I  B ]      = [ 0   I  −B ]
[ 0  0  I ]        [ 0   0   I ]
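The identity can be checked numerically: invert the 3×3 block matrix and read AB off the top-right block. A minimal sketch (function name mine; `np.linalg.inv` stands in for the hypothetical inversion oracle):

```python
import numpy as np

def mult_via_inversion(A, B, inv=np.linalg.inv):
    """Multiplication reduces to inversion: the inverse of
    [[I, A, 0], [0, I, B], [0, 0, I]] has AB in its top-right block."""
    n = A.shape[0]
    I, Z = np.eye(n), np.zeros((n, n))
    M = np.block([[I, A, Z], [Z, I, B], [Z, Z, I]])
    return inv(M)[:n, 2 * n:]          # top-right n x n block
```

So one inversion of a 3n×3n matrix yields the n×n product, proving the reduction direction on the slide.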

slide-56
SLIDE 56

Summary

} A reduces to B: we can use an algorithm for B to solve A (A → B)
} All operations can be solved in O(n^ω) time.

(figure: Matrix Multiplication, LUP-decomposition, Matrix inversion, Compute det(A), Solve Ax=b)

slide-57
SLIDE 57

Eigenvalues/Eigenvector?

} However, there is no such FMM-based algorithm for computing eigenvalues.

slide-58
SLIDE 58

Example of applications

} Matching M:

} A set of vertex-disjoint edges
} Matched vertices: the vertices associated with an edge in M
} Free vertices: unmatched vertices

} Perfect matching in bipartite graph:

} No free vertices

slide-59
SLIDE 59

Another example

} Perfect matching in bipartite graph:

} Problem: decide whether there is a perfect matching in G

slide-60
SLIDE 60

Adjacency matrix

} The adjacency matrix A of graph G=(V, E):

} Ai,j = 1 iff (i,j) ∈ E

(figure: example graph and its adjacency matrix)

slide-61
SLIDE 61

} Perfect matching in bipartite graph:

} Whether there is a perfect matching in G

} By determinant of adjacency matrix

(figure: example adjacency matrix)

slide-62
SLIDE 62

Another example

} Perfect matching in bipartite graph:

} Whether there is a perfect matching in G

} By determinant of adjacency matrix: det(A) = Σ_σ sgn(σ) · Π_i a_{i,σ(i)}
} σ is a permutation of {1,…,n}
} sgn(σ) is +1 or −1

(figure: a 4×4 matrix (aij))

slide-63
SLIDE 63

Another example

} Perfect matching in bipartite graph:

} Whether there is a perfect matching in G

} By determinant of adjacency matrix: det(A) = Σ_σ sgn(σ) · Π_i a_{i,σ(i)}
} σ is a permutation of {1,…,n}
} sgn(σ) is +1 or −1

A permutation means every row and every column contains exactly one selected element.
(figure: a 4×4 matrix (aij))

slide-64
SLIDE 64

} A perfect matching in a bipartite graph with n vertices on each side can also be seen as a permutation of {1,…,n}

slide-65
SLIDE 65

} A perfect matching in a bipartite graph with n vertices on each side can also be seen as a permutation of {1,…,n}

} Construct an n×n matrix X:

} xij is a variable if there is an edge from Ai to Bj
} xij = 0 if there is no edge from Ai to Bj

(figure: matrix X with variable entries x11, x12, x13, x21, x23, x32, x34, x43, x44)

slide-66
SLIDE 66

} A perfect matching in a bipartite graph with n vertices on each side can also be seen as a permutation of {1,…,n}

} Construct an n×n matrix X:

} From the definition of determinant, det(X) ≡ 0 if there is no perfect matching.

(figure: matrix X with variable entries)

slide-67
SLIDE 67

} The determinant is a polynomial of degree n
} We can randomly pick each nonzero xij from {1,…,2n}, and compute det(X)
} if there is no perfect matching, det(X) is always 0
} otherwise, det(X) is not 0 unless we are unlucky enough to pick a root of the polynomial det(X)
} Computing det(X) takes O(n^ω) time.

(example: one random instantiation of X)

slide-68
SLIDE 68

} The determinant is a polynomial of degree n
} We can randomly pick each nonzero xij from {1,…,2n}, and compute det(X)
} As in polynomial identity testing
} Computing det(X) takes O(n^ω) time.
} If there is no perfect matching, the determinant is always 0.
} If there are perfect matchings, Pr[det(X)≠0] ≥ 1/2
} Proof in the "Theory of Computation" course.
} So we can repeat k times to make the error probability ≤ 1/2^k.
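The randomized test above can be sketched as follows, a minimal sketch for small matrices (the function name, the number of trials, and the use of a floating-point determinant with a tolerance are my choices):

```python
import random
import numpy as np

def has_perfect_matching(adj, trials=20):
    """Randomized bipartite perfect-matching test: fill each edge
    position of X with a random value from {1,...,2n}, zeros elsewhere,
    and check det(X) != 0.  One-sided error: a True answer is always
    correct; a False answer errs with probability <= 2^-trials."""
    n = len(adj)
    for _ in range(trials):
        X = np.zeros((n, n))
        for i in range(n):
            for j in range(n):
                if adj[i][j]:
                    X[i, j] = random.randint(1, 2 * n)
        if abs(np.linalg.det(X)) > 1e-6:   # nonzero determinant: matching found
            return True
    return False
```

For large n an exact determinant over a finite field would be the robust choice; the floating-point version here is only illustrative.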

slide-69
SLIDE 69

Graph algorithms by FMM

} Transitive Closure: O(n^ω)
} All-pair shortest path (APSP):
} undirected unweighted: O(n^ω)
} directed unweighted: O(n^μ)
} (1+ε)-approximate with weights [0…M]: O(n^ω log M/ε)
} Maximum Matching: O(n^ω)
} Dominance Product: O(n^{(3+ω)/2})
} All-pair bottleneck path (APBP):
} vertex-capacitated graphs: O(n^μ)
} edge-capacitated graphs: O(n^{(3+ω)/2})

slide-70
SLIDE 70

Graph algorithms by FMM

} Transitive Closure: O(n^ω)
} All-pair shortest path (APSP):
} undirected unweighted: O(n^ω)
} directed unweighted: O(n^μ)
} (1+ε)-approximate with weights [0…M]: O(n^ω log M/ε)
} Maximum Matching: O(n^ω)
} Dominance Product: O(n^{(3+ω)/2})
} All-pair bottleneck path (APBP):
} vertex-capacitated graphs: O(n^μ)
} edge-capacitated graphs: O(n^{(3+ω)/2})

ω ≈ 2.373 (vs. trivial 3)
(3+ω)/2 ≈ 2.687
μ ≈ 2.531

slide-71
SLIDE 71

Simple problem

} In an undirected graph G, check whether G contains a triangle.
} trivial algorithm: check every triple of vertices (u,v,w)
} time: O(n^3)

slide-72
SLIDE 72

Simple problem

} In an undirected graph G, check whether G contains a triangle.
} By FMM?

slide-73
SLIDE 73

Adjacency matrix of G=(V,E)

} Ai,j = 1 iff (i,j) ∈ E
} Consider A^2:
} (A^2)i,j = Σ_k Ai,k·Ak,j

(figure: C = A × B with A = (aij), B = (bij), C = (cij))

slide-74
SLIDE 74

Adjacency matrix of G=(V,E)

} Ai,j = 1 iff (i,j) ∈ E
} Consider A^2:
} (A^2)i,j = Σ_k Ai,k·Ak,j
} (A^2)i,j > 0 is equivalent to ∃k such that Ai,k = Ak,j = 1,
} that is, (i,k), (k,j) ∈ E

slide-75
SLIDE 75

Adjacency matrix of G=(V,E)

} Ai,j = 1 iff (i,j) ∈ E
} Consider A^2:
} (A^2)i,j = Σ_k Ai,k·Ak,j
} (A^2)i,j > 0 is equivalent to ∃k such that Ai,k = Ak,j = 1,
} that is, (i,k), (k,j) ∈ E

} So G has a triangle: ∃(i,j) such that Ai,j = 1 and (A^2)i,j ≥ 1

(figure: example graph on vertices 1–7)
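The criterion above is one matrix product plus an entrywise check. A minimal sketch (function name mine):

```python
import numpy as np

def has_triangle(A):
    """G contains a triangle iff some edge (i,j) also has a common
    neighbour k, i.e. A[i,j] = 1 and (A^2)[i,j] >= 1."""
    A2 = A @ A                       # (A^2)[i,j] = number of common neighbours
    return bool(np.any((A == 1) & (A2 >= 1)))
```

With a fast matrix multiplication in place of `@`, this is the O(n^ω) triangle test.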

slide-76
SLIDE 76

What about Quadrilateral?

} Checking whether a graph contains a 4-cycle:

(figure: example graph on vertices 1–7)

slide-77
SLIDE 77

What about Quadrilateral?

} Checking whether a graph contains a 4-cycle:
} There must be (i,j) such that (A^2)i,j ≥ 2:
} (A^2)2,6 = 2 since A2,7 = A7,6 = 1 and A2,5 = A5,6 = 1
} Thus the time bound is also O(n^ω)

(figure: example graph on vertices 1–7)

slide-78
SLIDE 78

Problem solved?

} What if the 4-node subgraph is exactly a 4-cycle?
} the subgraph on the 4 vertices has exactly 4 edges

(figure: two example graphs, one containing an induced 4-cycle ✅ and one not ❌)

slide-79
SLIDE 79

Problem solved?

} What if the 4-node subgraph is exactly a 4-cycle?
} the subgraph on the 4 vertices has exactly 4 edges

(figure: two example graphs, one containing an induced 4-cycle ✅ and one not ❌)

slide-80
SLIDE 80

Randomized algorithm

} [Vassilevska, Wang, Williams, Yu 2014] All 4-node subgraphs except the "clique" and the "independent set" can be found in O(n^ω) time with high probability.

slide-81
SLIDE 81

Ideas

} Consider X = Σ_{(u,v)∈E} ((A^2)u,v choose 2)
} What if we have a "diamond"?
} What if we have a "clique"?

slide-82
SLIDE 82

Ideas

} Consider Σ_{(u,v)∈E} ((A^2)u,v choose 2)
} if we have a "diamond"
} (u,v) can only be the edge between the two degree-3 vertices
} if we have a "clique"
} any edge can be (u,v)
} so it is counted 6 times

slide-83
SLIDE 83

Ideas

} So

Σ_{(u,v)∈E} ((A^2)u,v choose 2) = 6×#(clique) + #(diamond)

} But it's still hard to separate them

slide-84
SLIDE 84

Ideas

} So

X = Σ_{(u,v)∈E} ((A^2)u,v choose 2) = 6×#(clique) + #(diamond)

} If X mod 6 ≠ 0, then of course the graph contains a diamond
} If X mod 6 = 0, we randomly delete some vertices
} with some probability, the number of diamonds will not be a multiple of 6; then return true
} we can run this procedure many times; if the graph contains no diamond, it always returns false

slide-85
SLIDE 85

What about 4-cycle?

slide-86
SLIDE 86

What about 4-cycle?

} Consider Y = Σ_{(u,v)∉E} ((A^2)u,v choose 2)
} If we have a diamond…
} If we have a 4-cycle…

slide-87
SLIDE 87

What about 4-cycle?

} Consider Y = Σ_{(u,v)∉E} ((A^2)u,v choose 2)
} If we have a diamond, it is counted once!!
} If we have a 4-cycle, it is counted twice

slide-88
SLIDE 88

Ideas

} So

X = Σ_{(u,v)∈E} ((A^2)u,v choose 2) = 6×#(clique) + #(diamond)
Y = Σ_{(u,v)∉E} ((A^2)u,v choose 2) = 2×#(4-cycle) + #(diamond)

} So by X − Y = 6×#(clique) − 2×#(4-cycle), using a similar idea we can check whether a graph contains a 4-cycle.
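The quantity X from the slides is easy to compute once A^2 is known. A minimal sketch (function name mine) that sums, over each undirected edge, the number of pairs of common neighbours:

```python
from math import comb
import numpy as np

def count_X(A):
    """X = sum over edges (u,v) of C((A^2)[u,v], 2).
    Each pair of common neighbours of an edge spans either a diamond
    (counted once, via its diagonal edge) or a K4 (counted 6 times),
    so X = 6 * #K4 + #diamonds."""
    A2 = A @ A
    n = len(A)
    return sum(comb(int(A2[u, v]), 2)
               for u in range(n) for v in range(u + 1, n) if A[u][v])
```

Computing A^2 dominates the cost, so X is available in O(n^ω) time.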

slide-89
SLIDE 89

Dominance Product (♦)

} Original product

} C[i,j]=∑k A[i,k]·B[k,j]

} Dominance Product

} M[i,j] = (A♦B)[i,j] = |{k | A[i,k] ≤ B[k,j]}|
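A reference implementation of the definition makes the later, faster algorithm easier to check against. A minimal O(n^3) sketch (function name mine):

```python
def dominance(A, B):
    """Dominance product: M[i][j] = |{k : A[i][k] <= B[k][j]}|,
    computed directly from the definition in O(n^3) time."""
    n = len(A)
    return [[sum(1 for k in range(n) if A[i][k] <= B[k][j])
             for j in range(n)] for i in range(n)]
```

The bucketed algorithm later in the slides must reproduce exactly these counts, only faster.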

slide-90
SLIDE 90

Dominance Product (♦)

} Dominance Product

} M[i,j] = (A♦B)[i,j] = |{k | A[i,k] ≤ B[k,j]}|

(figure: example matrices A, B with M[i,j] = 3)

slide-91
SLIDE 91

Dominance Product (♦)

} Dominance Product

} M[i,j] = (A♦B)[i,j] = |{k | A[i,k] ≤ B[k,j]}|

} From Matoušek 1991, the dominance product of any two n×n matrices can be computed in O(n^{(3+ω)/2}) time
} From V. Vassilevska et al. 2007, for two sparse matrices with m1 and m2 valid elements respectively, the running time is O(√(m1·m2)·n^{(ω−1)/2})

slide-92
SLIDE 92

Algorithm for Dominance Product?

} Dominance Product

} M[i,j] = (A♦B)[i,j] = |{k | A[i,k] ≤ B[k,j]}|

slide-93
SLIDE 93

Algorithm for Dominance Product?

} Dominance Product

} M[i,j] = (A♦B)[i,j] = |{k | A[i,k] ≤ B[k,j]}|

} A[i,1] is compared with B[1,j].

slide-94
SLIDE 94

Algorithm for Dominance Product?

} Dominance Product

} M[i,j] = (A♦B)[i,j] = |{k | A[i,k] ≤ B[k,j]}|

} A[i,1] is compared with B[1,j].
} Thus, elements of the k-th column of A are only compared with the k-th row of B.

slide-95
SLIDE 95

Algorithm for Dominance Product?

} Dominance Product

} M[i,j] = (A♦B)[i,j] = |{k | A[i,k] ≤ B[k,j]}|

} A[i,1] is compared with B[1,j].
} Thus, elements of the k-th column of A are only compared with the k-th row of B.

slide-96
SLIDE 96

Algorithm for Dominance Product?

} Dominance Product

} M[i,j] = (A♦B)[i,j] = |{k | A[i,k] ≤ B[k,j]}|
} A[i,1] is compared with B[1,j].
} Thus, elements of the k-th column of A are only compared with the k-th row of B.
} Thus, define the set Lk = {A[i,k] | 1 ≤ i ≤ n} ∪ {B[k,j] | 1 ≤ j ≤ n}, and sort Lk
} Partition the sorted list of each Lk into r parts:
} each with 2n/r elements

(figure: sorted list Lk, increasing, split into r parts)

slide-97
SLIDE 97

} Construct:
} Ab[i,k] = 1 if A[i,k] is in the b-th part of Lk, 0 otherwise
} Bb[k,j] = 1 if B[k,j] is in the b-th part of Lk, 0 otherwise

If b < b', every pair counted by Ab·Bb' is always included in the dominance product.

slide-98
SLIDE 98

} For each b = 1…r, compute the Boolean product Cb = Ab·(Bb+1 + Bb+2 + ... + Br), then compute ΣbCb.
} since the elements in the b-th part of Lk are ≤ the elements in the (b+1)-th and later parts
} What's left?

M[i,j] = (A♦B)[i,j] = |{k | A[i,k] ≤ B[k,j]}|

slide-99
SLIDE 99

} For each b = 1…r, compute the Boolean product Cb = Ab·(Bb+1 + Bb+2 + ... + Br), then compute ΣbCb.
} Then we only need to compare elements in the same parts of the partition
} Running Time?

M[i,j] = (A♦B)[i,j] = |{k | A[i,k] ≤ B[k,j]}|

slide-100
SLIDE 100

} For each b = 1…r, compute the Boolean product Cb = Ab·(Bb+1 + Bb+2 + ... + Br), then compute ΣbCb.
} Time: O(r·n^ω)
} Then we only need to compare elements in the same parts of the partition
} Time: for each part, O(n·(n/r)^2); in total: O(rn·(n/r)^2) = O(n^3/r)
} To balance, we let r = n^{(3−ω)/2}; total time: O(n^{(3+ω)/2})

M[i,j] = (A♦B)[i,j] = |{k | A[i,k] ≤ B[k,j]}|

slide-101
SLIDE 101

} For each b = 1…r, compute the Boolean product Cb = Ab·(Bb+1 + Bb+2 + ... + Br), then compute ΣbCb.
} Time: O(r·n^ω)
} Then we only need to compare elements in the same parts of the partition
} Time: for each part, O(n·(n/r)^2); in total: O(rn·(n/r)^2) = O(n^3/r)
} To balance, we let r = n^{(3−ω)/2}; total time: O(n^{(3+ω)/2})
} What if A only contains m valid elements?
} other elements of A are +∞

M[i,j] = (A♦B)[i,j] = |{k | A[i,k] ≤ B[k,j]}|

slide-102
SLIDE 102

Sparse dominance product

} What if A only contains m valid elements?
} other elements of A are +∞
} For each b = 1…r, compute the Boolean product Cb = Ab·(Bb+1 + Bb+2 + ... + Br), then compute ΣbCb.
} Time: O(r·n^ω)
} Then we only need to compare elements in the same parts of the partition
} Every element of A needs to be compared with O(n/r) elements of Bb for some b; in total: O(mn/r + n^2)
} total time: O(m^{1/2}·n^{(1+ω)/2})

slide-103
SLIDE 103

Bottleneck paths (Widest paths)

} Here the "bottleneck path" between s and t is the path with maximum flow from s to t.
} For a graph G=(V,E) with a capacity function w: E→R on every edge, the maximum flow is defined as:

f(s,t) = max_{p: path s→t} min_{e∈p} w(e)

slide-104
SLIDE 104

Max-min Product

} Max-Min Product C = A○B
} C[i,j] = max_k min{A[i,k], B[k,j]}
} which is just the bottleneck path in a 3-layer graph:

(figure: 3-layer graph; first-layer edge weights from A, second-layer edge weights from B)
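The definition translates directly to a reference O(n^3) routine (function name mine):

```python
def maxmin(A, B):
    """(max,min)-product: C[i][j] = max_k min(A[i][k], B[k][j]),
    i.e. the best bottleneck over all middle vertices k."""
    n = len(A)
    return [[max(min(A[i][k], B[k][j]) for k in range(n))
             for j in range(n)] for i in range(n)]
```

Entry (i,j) is exactly the widest 2-hop path from i to j through the middle layer.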

slide-105
SLIDE 105

Max-min product and All-pair Bottleneck Paths

} Since a path is composed of at most n edges, the flows of the all-pair bottleneck paths are given by
} G○G○…○G (n times) = G^n = ((G^2)^2 …)^2 (about log n squarings)

} In fact, the time complexity for APBP is the same as the

max-min product.
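The repeated-squaring idea can be sketched as follows, a minimal sketch using a naive O(n^3) (max,min)-square (function names and the capacity-matrix conventions, inf on the diagonal and 0 for non-edges, are my choices):

```python
import math

def maxmin_square(C):
    """One (max,min)-squaring of a capacity matrix."""
    n = len(C)
    return [[max(min(C[i][k], C[k][j]) for k in range(n))
             for j in range(n)] for i in range(n)]

def apbp(W):
    """All-pair bottleneck capacities: square W under the
    (max,min)-product about log2(n) times, computing G^n."""
    C = [row[:] for row in W]
    for _ in range(max(1, math.ceil(math.log2(len(W))))):
        C = maxmin_square(C)
    return C
```

Replacing `maxmin_square` by a fast (max,min)-product gives the APBP bounds quoted earlier.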

slide-106
SLIDE 106

Another way to think of max-min

} Since in the result,
} C[i,j] = max_{1≤k≤n} min{A[i,k], B[k,j]}
} So if A[i,k] ≤ B[k,j], min{A[i,k], B[k,j]} = A[i,k]; otherwise it is B[k,j]
} Thus, we just need to compute:

slide-107
SLIDE 107

Another way to think of max-min

} Since in the result,
} C[i,j] = max_{1≤k≤n} min{A[i,k], B[k,j]}
} So if A[i,k] ≤ B[k,j], min{A[i,k], B[k,j]} = A[i,k]; otherwise it is B[k,j]
} Thus, we just need to compute:
} D[i,j] = max{A[i,k] | A[i,k] ≤ B[k,j]}
} D'[i,j] = max{B[k,j] | A[i,k] > B[k,j]}
} And C[i,j] = max{D[i,j], D'[i,j]}
} We only consider the algorithm for D[i,j] here.

slide-108
SLIDE 108

An Example To Show Dominance and Max-Min Product

} Consider the following example:

(figure: example matrices A and B)

slide-109
SLIDE 109

An Example To Show Dominance and Max-Min Product

M[i,j]=3 (Dominance Product)

D[i,j]=7

D[i,j]=max{A[i,k]| A[i,k] ≤ B[k,j]}

slide-110
SLIDE 110

Algorithm for max-min product

} Remember we want to compute:
} D[i,j] = max{A[i,k] | A[i,k] ≤ B[k,j]}
} [Vassilevska, Williams, Yuster 2007] We sort the elements of A, and partition every row into s parts

slide-111
SLIDE 111

Algorithm for max-min product

} Remember we want to compute:
} D[i,j] = max{A[i,k] | A[i,k] ≤ B[k,j]}
} [Vassilevska, Williams, Yuster 2007] We sort the elements of A, and partition every row into s parts; for b = 1,…,s:
} Then Ab[i,j] = 1 if A[i,j] is in the b-th part of row i, 0 otherwise
slide-112
SLIDE 112

Partition A

} The matrix A is then partitioned into submatrices A1, A2, A3, …, As

Compute the dominance products A1♦B, A2♦B, A3♦B, …, As♦B

slide-113
SLIDE 113

} Just as in this graph: for any (i,j), we intend to find Di,j = max{Ai,k | Ai,k ≤ Bk,j}

First, we find which part Di,j is in.

slide-114
SLIDE 114

From the Dominance Products:

} If (Ap♦B)i,j > 0,
} then there exists an Ai,k in Ap such that Ai,k ≤ Bk,j, so Di,j is in Ap.

slide-115
SLIDE 115

From the Dominance Products:

} If (Ap♦B)i,j = 0 and (Ap−1♦B)i,j > 0,
} then there exists an Ai,k in Ap−1 such that Ai,k ≤ Bk,j, so Di,j is in Ap−1.

slide-116
SLIDE 116

From the Dominance Products:

} So in general we just find the maximum part q in which (Aq♦B)i,j > 0, which means Di,j is in part q
} Then check the elements of part q in row i of A one by one.
slide-117
SLIDE 117

Algorithm for max-min product

} Remember we want to compute:
} D[i,j] = max{A[i,k] | A[i,k] ≤ B[k,j]}
} [Vassilevska, Williams, Yuster 2007] We sort the elements of A, and partition every row into s parts
} Then Ab[i,j] = 1 if A[i,j] is in the b-th part of row i, 0 otherwise
} Compute the dominance product Ab♦B for b = 1…s
} Then for each pair (i,j), find the maximum b such that (Ab♦B)[i,j] > 0; then D[i,j] is in the b-th part of row i of A

slide-118
SLIDE 118

Algorithm for max-min product

} Remember we want to compute:
} D[i,j] = max{A[i,k] | A[i,k] ≤ B[k,j]}
} [Vassilevska, Williams, Yuster 2007] We sort the elements of A, and partition every row into s parts
} Compute the dominance product Ab♦B for b = 1…s
} Time: O(m^{1/2}·n^{(1+ω)/2}) for each sparse dominance product, where m = n^2/s
} Then for each pair (i,j), find the maximum b such that (Ab♦B)[i,j] > 0; then D[i,j] is in the b-th part of row i of A
} Time: O(n/s) for each i,j
} Total time: O(n^{2+ω/3})
slide-119
SLIDE 119

Previous Results

ω ≈ 2.376 (vs. trivial 3); (3+ω)/2 ≈ 2.688; 2+ω/3 ≈ 2.792

Algorithms for bottleneck paths:

Edge-weighted:
  Single-source: O(m + n log n)
  Single-source, single-destination: O(m log* n) (Gabow & Tarjan)
  All-pair (trivial): O(mn)
  All-pair (2007): O(n^{2+ω/3}) ≈ O(n^2.792) (Vassilevska, Williams and Yuster)
  All-pair (2009): O(n^{(3+ω)/2}) ≈ O(n^2.688) (Duan, Pettie)
Vertex-weighted:
  All-pair (2007): O(n^μ) ≈ O(n^2.575) (Shapira, Yuster and Zwick)

slide-120
SLIDE 120

Summary

} Algorithms for matrix operations with bound O(n^2.373):
} matrix multiplication
} inversion
} LUP-decomposition
} solving linear equations
} determinant
} but not eigenvalues!

} Applications:
} Perfect matching in bipartite graphs
} 4-node subgraphs
} dominance product / (max,min)-product