On the Log Rank Conjecture (Thomas Rothvoss, UW Seattle, Current Topics Seminar)



SLIDE 1

On the Log Rank Conjecture

Thomas Rothvoss

UW Seattle Current Topics Seminar

SLIDE 9

Communication complexity

Setting:

◮ Function f : X × Y → {0, 1}
◮ Players Alice and Bob agree a priori on a deterministic communication protocol
◮ Alice receives x ∈ X, Bob receives y ∈ Y
◮ They exchange messages to compute f(x, y)

[Figure: Alice (holding x ∈ X) and Bob (holding y ∈ Y) exchange bits 1, 1, 0, 1 until both know f(x, y).]

CC(f) = min over protocols of max_{x ∈ X, y ∈ Y} {number of bits exchanged to compute f(x, y)}

SLIDE 12

Communication complexity (2)

Example:

◮ Input for Alice: x ∈ {0, 1}^n
◮ Input for Bob: y ∈ {0, 1}^n

f(x, y) = x_1 + ... + x_n + y_1 + ... + y_n mod 2

A 1-bit protocol: (1) Alice sends x_1 + ... + x_n mod 2 to Bob. (2) Bob then knows the answer.

[Figure: the communication matrix of f, with rows grouped by the parity of x and columns by the parity of y; the (even x, odd y) and (odd x, even y) blocks are all ones, the other two blocks all zeros.]
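The 1-bit protocol above can be sketched in a few lines of code (a minimal illustration of ours; the function names are not from the talk):

```python
# f(x, y) = x_1 + ... + x_n + y_1 + ... + y_n mod 2
def f(x, y):
    return (sum(x) + sum(y)) % 2

def one_bit_protocol(x, y):
    # (1) Alice sends the single bit x_1 + ... + x_n mod 2 to Bob.
    alice_bit = sum(x) % 2
    # (2) Bob adds his own parity and knows the answer.
    return (alice_bit + sum(y)) % 2

print(one_bit_protocol([1, 0, 1], [1, 0, 0]))  # 1, matching f([1,0,1], [1,0,0])
```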

SLIDE 15

Communication complexity (2)

Example:

◮ Function EQ : {0, 1}^n × {0, 1}^n → {0, 1} with

EQ(x, y) = 1 if x = y, and 0 otherwise

◮ Complexity theory 101: CC(EQ) = n

[Figure: the communication matrix of EQ is the 2^n × 2^n identity matrix.]

SLIDE 20

Communication complexity (3)

[Figure: a protocol tree; each internal node is labeled with the bit sent, and along each branch the communication matrix is split into finer rectangles.]

Observations:

◮ For a leaf v of the tree, R_v := {(x, y) : the protocol ends in v} is a monochromatic rectangle
◮ A protocol exchanging k bits partitions the matrix into at most 2^k monochromatic rectangles, each of rank at most 1, hence rank(f) ≤ 2^k
◮ CC(f) ≥ log rank(f)

SLIDE 24

Relation to rank

◮ CC(f) ≥ log rank(f)
◮ CC(f) ≤ rank(f).

Conjecture (Lovász & Saks '88)
CC(f) ≤ (log rank(f))^{O(1)}.

Theorem (Lovett '14)
CC(f) ≤ Õ(√rank(f)).

◮ Here: a much shorter and more direct proof.

SLIDE 27

The technical main result

◮ It suffices to show:

Lemma
Any 0/1 matrix A has an almost monochromatic submatrix R of size
|R| ≥ 2^{−Õ(√rank(A))} · |A|
◮ "Almost" means #zeroes / #ones ≤ 1 / (8 · rank(A))

[Figure: a 0/1 matrix A with a large almost monochromatic submatrix highlighted.]

SLIDE 31

Thoughts about rank

Let A be a 0/1 matrix of rank r. By definition, A_ij = ⟨u_i, v_j⟩ with u_i, v_j ∈ R^r.

[Figure: matrix A with row vectors u_i and column vectors v_j.]

◮ For any regular matrix T: with u'_i := T u_i and v'_j := (T^{−1})^T v_j one has ⟨u'_i, v'_j⟩ = ⟨u_i, v_j⟩, so the factorization can be transformed freely

Lemma
The vectors can be chosen so that ‖u_i‖_2, ‖v_j‖_2 ≤ r^{1/4} for all i, j
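The bullet point above is easy to check numerically; here is a tiny sketch (our own 2 × 2 example, with (T^{−1})^T computed by hand for this particular T):

```python
# Check that u -> T u, v -> (T^{-1})^T v preserves all inner products,
# so we are free to rescale the rank factorization A_ij = <u_i, v_j>.
def matvec(M, x):
    return [sum(m * xi for m, xi in zip(row, x)) for row in M]

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

T = [[2, 1], [1, 1]]            # a regular (invertible) matrix, det = 1
T_inv_T = [[1, -1], [-1, 2]]    # (T^{-1})^T, computed by hand for this T

u, v = [3, 4], [5, 6]
print(dot(u, v), dot(matvec(T, u), matvec(T_inv_T, v)))  # 39 39
```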

SLIDE 34

John's theorem

John's Theorem
For any symmetric convex body K ⊆ R^n, there is an ellipsoid E so that E ⊆ K ⊆ √n · E.

[Figure: body K with inscribed ellipsoid E and the scaled copy √n · E containing K.]

◮ T : R^n → R^n linear map ⇒ T(ball) is an ellipsoid

SLIDE 40

John's Theorem (2)

Proof:

◮ Suppose the maximum volume ellipsoid contained in K is a unit ball.
◮ Suppose some point x ∈ K has ‖x‖_2 > √n.
◮ Stretch the ball along x; shrink it in the orthogonal directions.
◮ Then vol(E) > vol(ball), contradicting the maximality of the ball.

[Figure: body K, the far-away point x, and the stretched ellipsoid E.]
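The volume computation behind the last two bullets is not spelled out on the slide; a hedged sketch of the standard argument (our reconstruction, writing B for the unit ball and s := ‖x‖_2):

```latex
Let $E$ be the ellipsoid with semiaxis $a$ along $x/s$ and semiaxis $b$ in
the $n-1$ orthogonal directions. One can check that
$E \subseteq \mathrm{conv}(B \cup \{\pm x\}) \subseteq K$ whenever
\[
  \frac{a^2}{s^2} + b^2\Bigl(1 - \frac{1}{s^2}\Bigr) \le 1,
  \qquad \text{while} \qquad
  \mathrm{vol}(E) = a\, b^{\,n-1} \cdot \mathrm{vol}(B).
\]
Taking $a = 1 + \varepsilon$ and $b^2 = \frac{s^2 - a^2}{s^2 - 1}$ satisfies
the constraint with equality, and for small $\varepsilon > 0$,
\[
  \ln\bigl(a\, b^{\,n-1}\bigr)
  = \ln a + \frac{n-1}{2} \ln b^2
  \approx \varepsilon \Bigl(1 - \frac{n-1}{s^2 - 1}\Bigr) > 0
  \quad \text{since } s^2 > n,
\]
so $\mathrm{vol}(E) > \mathrm{vol}(B)$, contradicting maximality.
```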

SLIDE 47

Rescaling vectors

◮ Given r-dim. vectors with ⟨u_i, v_j⟩ ∈ {0, 1}
◮ Choose K := conv{±u_i | i row index}
◮ After a linear transformation (John's theorem plus scaling): (1/r^{1/4}) · B ⊆ K ⊆ r^{1/4} · B, so in particular ‖u_i‖_2 ≤ r^{1/4}
◮ Claim: ‖v_j‖_2 ≤ r^{1/4}. Take w := (1/r^{1/4}) · v_j/‖v_j‖_2 ∈ K; since w is a convex combination of the ±u_i,

1 ≥ max_i |⟨u_i, v_j⟩| ≥ ⟨w, v_j⟩ = (1/r^{1/4}) · ‖v_j‖_2

[Figure: K sandwiched between the balls r^{−1/4}B and r^{1/4}B, with the vectors u_i spanning K and the point w on the small ball in direction v_j.]

SLIDE 48

Back to our problem

◮ Normalizing the vectors from the previous slides (their norms are at most r^{1/4}, so ‖u_i‖_2 · ‖v_j‖_2 ≤ √r), there are unit vectors u_i, v_j so that

⟨u_i, v_j⟩ = 0 if A_ij = 0, and ⟨u_i, v_j⟩ ≥ 1/√r if A_ij = 1

[Figure: matrix A with rows labeled by the unit vectors u_i and columns by the unit vectors v_j.]

SLIDE 51

Sheppard's formula

Sheppard's formula
For unit vectors u, v ∈ R^2, take a random direction g. Then
Pr[⟨g, u⟩ ≥ 0 and ⟨g, v⟩ ≥ 0] = (1/2) · (1 − arccos(⟨u, v⟩)/π)

[Figure: unit vectors u, v and a random direction g; plot of the function α ↦ (1/2)(1 − arccos(α)/π).]

SLIDE 52

Sheppard's formula (2)

For unit vectors u, v ∈ R^2 with small inner product, take a random direction g. Then
Pr[⟨g, u⟩ ≥ 0 and ⟨g, v⟩ ≥ 0] ≈ 1/4 + const · ⟨u, v⟩

[Figure: the same plot; near α = 0, arccos(α) ≈ π/2 − α, so (1/2)(1 − arccos(α)/π) ≈ 1/4 + α/(2π).]
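Sheppard's formula is easy to verify by simulation; a quick Monte Carlo check (our own illustration):

```python
# Monte Carlo check of Sheppard's formula: for unit vectors u, v and a
# random direction g, Pr[<g,u> >= 0 and <g,v> >= 0] = (1 - arccos(<u,v>)/pi)/2.
import math
import random

def sheppard(u, v, trials=200000):
    hits = 0
    for _ in range(trials):
        # A random direction: a 2-dimensional standard Gaussian vector.
        g = (random.gauss(0, 1), random.gauss(0, 1))
        if g[0] * u[0] + g[1] * u[1] >= 0 and g[0] * v[0] + g[1] * v[1] >= 0:
            hits += 1
    return hits / trials

theta = 1.0                           # angle between u and v (radians)
u = (1.0, 0.0)
v = (math.cos(theta), math.sin(theta))
exact = 0.5 * (1 - math.acos(u[0] * v[0] + u[1] * v[1]) / math.pi)
print(round(exact, 3), round(sheppard(u, v), 3))  # both close to 0.341
```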

SLIDE 55

Finding an almost monochr. submatrix

◮ Suppose A is a 0/1 matrix with #ones(A) ≥ #zeroes(A)
◮ Sample a random Gaussian g and take the submatrix

B = {i : ⟨g, u_i⟩ ≥ 0} × {j : ⟨g, v_j⟩ ≥ 0}

◮ By Sheppard's formula, a zero entry (⟨u_i, v_j⟩ = 0) survives with probability 1/4, while a one entry (⟨u_i, v_j⟩ ≥ 1/√r) survives with probability at least 1/4 + Ω(1/√r). Since #ones(A) ≥ #zeroes(A), we know

E[#zeroes in B] / E[#ones in B] ≤ (1/4) / (1/4 + 1/√r) = 1 − Θ(1/√r)

E[fraction of entries in B] ≈ 1/4

[Figure: matrix A with the random submatrix B highlighted.]

SLIDE 58

Finding an almost monochr. submatrix

◮ Suppose A is a 0/1 matrix with #ones(A) ≥ #zeroes(A)
◮ Sample T := Θ(√r log(r)) random Gaussians g_1, ..., g_T and take

B = {i : ⟨g_t, u_i⟩ ≥ 0 ∀t ∈ [T]} × {j : ⟨g_t, v_j⟩ ≥ 0 ∀t ∈ [T]}

◮ We know

E[#zeroes in B] / E[#ones in B] ≤ ((1/4) / (1/4 + 1/√r))^T ≤ 1/(8r) for a suitable choice of T

E[fraction of entries in B] ≈ (1/4)^T = 2^{−Θ(√r log r)}

[Figure: matrix A with the (now almost monochromatic) submatrix B highlighted.]
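The two slides above describe the entire sampling procedure, so it fits in a few lines of code. The sketch below is our own simplified toy version, not the talk's exact setting: there, zero entries have inner product exactly 0 and one entries at least 1/√r, while here we just take random unit vectors and define A by the sign of the inner product.

```python
# Intersect T random Gaussian half-spaces to bias the surviving submatrix
# of a low-rank 0/1 matrix toward ones.
import math
import random

def find_biased_submatrix(U, V, T):
    """Keep row i iff <g_t, U[i]> >= 0 and column j iff <g_t, V[j]> >= 0
    for all of T independent random Gaussian directions g_t."""
    dim = len(U[0])
    rows, cols = list(range(len(U))), list(range(len(V)))
    for _ in range(T):
        g = [random.gauss(0, 1) for _ in range(dim)]
        rows = [i for i in rows if sum(a * b for a, b in zip(g, U[i])) >= 0]
        cols = [j for j in cols if sum(a * b for a, b in zip(g, V[j])) >= 0]
    return rows, cols

def random_unit_vector(dim=3):
    v = [random.gauss(0, 1) for _ in range(dim)]
    norm = math.sqrt(sum(x * x for x in v))
    return [x / norm for x in v]

# Toy instance: A_ij = 1 iff <u_i, v_j> > 0 for random unit vectors in R^3.
U = [random_unit_vector() for _ in range(200)]
V = [random_unit_vector() for _ in range(200)]
A = [[1 if sum(a * b for a, b in zip(u, v)) > 0 else 0 for v in V] for u in U]

rows, cols = find_biased_submatrix(U, V, T=3)
if rows and cols:
    ones = sum(A[i][j] for i in rows for j in cols)
    # Roughly half of all entries of A are ones; inside B the fraction
    # of ones is typically much higher.
    print(len(rows), len(cols), ones / (len(rows) * len(cols)))
```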

SLIDE 60

The end

◮ The following is equivalent to the log-rank conjecture:

Conjecture
Any rank-r 0/1 matrix A has a submatrix B with
◮ |B| ≥ 2^{−(log r)^{O(1)}} · |A|
◮ B is monochromatic (except for a 1/(8r)-fraction of entries).

Thanks for your attention