On the Log Rank Conjecture
Thomas Rothvoss, UW Seattle
Current Topics Seminar
Communication complexity

Setting:
◮ Function f : X × Y → {0, 1}
◮ Players Alice and Bob agree a priori on a deterministic communication protocol
◮ Alice receives x ∈ X, Bob receives y ∈ Y
◮ They exchange messages to compute f(x, y)

[Figure: Alice holds x ∈ X, Bob holds y ∈ Y; they exchange bits until both know f(x, y).]

CC(f) = min_{protocol} max_{x ∈ X, y ∈ Y} {#bits to compute f(x, y)}
Communication complexity (2)

Example:
◮ Input for Alice: x ∈ {0, 1}^n
◮ Input for Bob: y ∈ {0, 1}^n

f(x, y) = x_1 + … + x_n + y_1 + … + y_n mod 2

A 1-bit protocol: (1) Alice sends x_1 + … + x_n mod 2 to Bob. (2) Bob then knows the answer.

[Figure: the communication matrix of f, blocked by the parity of x and y; the 1-entries form two all-ones blocks (even x / odd y and odd x / even y).]
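The 1-bit protocol above can be simulated in a few lines. This is an illustrative sketch (the function names are our own, not from the talk):

```python
from itertools import product

def alice_message(x):
    """Alice's single bit: the parity of her input string."""
    return sum(x) % 2

def bob_output(y, msg):
    """Bob adds his own parity to Alice's bit to get f(x, y)."""
    return (msg + sum(y)) % 2

def f(x, y):
    """The target function: parity of all 2n input bits."""
    return (sum(x) + sum(y)) % 2

# Check the protocol on every input pair for n = 3.
n = 3
for x in product([0, 1], repeat=n):
    for y in product([0, 1], repeat=n):
        assert bob_output(y, alice_message(x)) == f(x, y)
```

One bit suffices because Bob never needs x itself, only its parity.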
Communication complexity (2)

Example:
◮ Function EQ : {0, 1}^n × {0, 1}^n → {0, 1} with

EQ(x, y) = 1 if x = y, and 0 otherwise

◮ Complexity theory 101: CC(EQ) = n

[Figure: the communication matrix of EQ is the 2^n × 2^n identity matrix; the 1s sit on the diagonal.]
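Since the communication matrix of EQ is the identity, its rank is 2^n and log rank(EQ) = n matches CC(EQ) = n exactly. A quick numerical sanity check (a sketch; numpy is used only to confirm the rank):

```python
import numpy as np
from itertools import product

n = 3
inputs = list(product([0, 1], repeat=n))
# Communication matrix of EQ: the 2^n x 2^n identity matrix.
M = np.array([[1 if x == y else 0 for y in inputs] for x in inputs])

rank = int(np.linalg.matrix_rank(M))
print(rank)                 # 2^n = 8
print(int(np.log2(rank)))   # n = 3
```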
Communication complexity (3)

[Figure: the protocol tree; its leaves partition the communication matrix into 0/1 rectangles.]

Observations:
◮ For a leaf v of the tree, R_v := {(x, y) : protocol ends in v} is a monochromatic rectangle
◮ A protocol exchanging k bits has at most 2^k leaves ⇒ rank(f) ≤ 2^k
◮ CC(f) ≥ log rank(f)
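For the parity example earlier, the 1-bit protocol gives the bound rank(f) ≤ 2^1 = 2, and the matrix indeed splits into two rank-1 all-ones blocks. A sketch verifying this numerically:

```python
import numpy as np
from itertools import product

n = 3
inputs = list(product([0, 1], repeat=n))
# Communication matrix of the 2n-bit parity function.
M = np.array([[(sum(x) + sum(y)) % 2 for y in inputs] for x in inputs])

# M = (even rows x odd cols) + (odd rows x even cols): a sum of two
# rank-1 all-ones rectangles, so rank(M) <= 2 = 2^k for the k = 1 protocol.
print(int(np.linalg.matrix_rank(M)))  # 2
```

This also shows the lower bound CC(f) ≥ log rank(f) = 1 is tight here.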
Relation to rank

◮ CC(f) ≥ log rank(f)
◮ CC(f) ≤ rank(f)

Conjecture (Lovász & Saks ’88)
CC(f) ≤ (log rank(f))^{O(1)}.

Theorem (Lovett ’14)
CC(f) ≤ Õ(√rank(f)).

◮ Here: a much shorter and direct proof by me.
The technical main result

◮ It suffices to show:

Lemma
Any 0/1 matrix A has an almost monochromatic submatrix R of size |R| ≥ 2^{−Õ(√rank(A))} · |A|.

◮ "Almost" means #zeroes / #ones ≤ 1 / (8 · rank(A))

[Figure: a 0/1 matrix A with the almost monochromatic submatrix R highlighted.]
Thoughts about rank

Let A be a 0/1 matrix of rank r. By definition, A_ij = ⟨u_i, v_j⟩ with u_i, v_j ∈ R^r.

[Figure: matrix A with vector u_i attached to row i and v_j to column j.]

◮ For any regular matrix T: setting u'_i := T u_i and v'_j := (T^{−1})^T v_j gives ⟨u'_i, v'_j⟩ = ⟨u_i, v_j⟩

Lemma
The vectors can be chosen so that ‖u_i‖_2, ‖v_j‖_2 ≤ r^{1/4} for all i, j.
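A small sketch of the factorization and the change-of-basis invariance (here the factorization comes from numpy's SVD, not from the balancing procedure the lemma provides):

```python
import numpy as np

rng = np.random.default_rng(0)
A = np.array([[1, 0, 1],
              [0, 1, 1],
              [1, 1, 0]], dtype=float)
r = int(np.linalg.matrix_rank(A))  # here r = 3

# Rank factorization A = U V^T; rows of U are the u_i, rows of V the v_j.
W, s, Vt = np.linalg.svd(A)
U = W[:, :r] * s[:r]
V = Vt[:r, :].T
assert np.allclose(U @ V.T, A)

# Any invertible T preserves every inner product <u_i, v_j>:
# u'_i = T u_i and v'_j = (T^{-1})^T v_j.
T = rng.standard_normal((r, r)) + 3 * np.eye(r)  # invertible for this seed
U2 = U @ T.T
V2 = V @ np.linalg.inv(T)
assert np.allclose(U2 @ V2.T, A)
```

This freedom in choosing T is exactly what the rescaling argument below exploits.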
John’s theorem

John’s Theorem
For any symmetric convex body K ⊆ R^n, there is an ellipsoid E so that E ⊆ K ⊆ √n · E.

[Figure: body K with inscribed ellipsoid E and circumscribed copy √n · E.]

◮ T : R^n → R^n linear map ⇒ T(ball) is an ellipsoid
John’s Theorem (2)

Proof sketch:
◮ Suppose the maximum volume ellipsoid in K is the unit ball.
◮ Suppose for contradiction that some point x ∈ K has ‖x‖_2 > √n.
◮ Stretch the ball along x and shrink it in the orthogonal directions to obtain an ellipsoid E still contained in K.
◮ Then vol(E) > vol(ball), contradicting maximality.
Rescaling vectors

[Figure: K = conv{±u_i} squeezed between the balls r^{−1/4} · B and r^{1/4} · B, with w on the inner ball.]

◮ Given r-dim. vectors with ⟨u_i, v_j⟩ ∈ {0, 1}
◮ Choose K := conv{±u_i | i row index}
◮ After a linear transformation (John’s theorem): (1/r^{1/4}) · B ⊆ K ⊆ r^{1/4} · B
◮ Claim: ‖v_j‖_2 ≤ r^{1/4}. With w := r^{−1/4} · v_j / ‖v_j‖_2 ∈ K:

1 ≥ max_i |⟨u_i, v_j⟩| ≥ ⟨w, v_j⟩ = (1/r^{1/4}) · ‖v_j‖_2
Back to our problem

[Figure: matrix A with unit vectors u_i attached to rows and v_j to columns.]

◮ There are unit vectors u_i, v_j so that

⟨u_i, v_j⟩ = 0 if A_ij = 0, and ⟨u_i, v_j⟩ ≥ 1/√r if A_ij = 1
Sheppard’s formula

Sheppard’s formula
For unit vectors u, v ∈ R^2, take a random direction g. Then

Pr[⟨g, u⟩ ≥ 0 and ⟨g, v⟩ ≥ 0] = (1/2) · (1 − arccos(⟨u, v⟩)/π)

[Figure: vectors u, v, a random direction g, and the plot of α ↦ (1/2)(1 − arccos(α)/π).]
Sheppard’s formula

For unit vectors u, v ∈ R^2 and a random direction g:

Pr[⟨g, u⟩ ≥ 0 and ⟨g, v⟩ ≥ 0] ≈ 1/4 + const · ⟨u, v⟩

(the linearization of the exact formula for small ⟨u, v⟩)
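Sheppard’s formula is easy to check by Monte Carlo. A sketch (not part of the talk), using a Gaussian vector as the random direction:

```python
import numpy as np

rng = np.random.default_rng(0)
alpha = 0.3
u = np.array([1.0, 0.0])
v = np.array([alpha, np.sqrt(1 - alpha**2)])  # unit vector with <u,v> = alpha

g = rng.standard_normal((200_000, 2))         # random directions in R^2
empirical = np.mean((g @ u >= 0) & (g @ v >= 0))
exact = 0.5 * (1 - np.arccos(alpha) / np.pi)

assert abs(empirical - exact) < 0.01
```

A Gaussian works here because its direction is uniformly distributed on the circle.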
Finding an almost monochr. submatrix

◮ Suppose A is a 0/1 matrix with #ones(A) ≥ #zeroes(A)
◮ Sample a random Gaussian g and set

B = {i : ⟨g, u_i⟩ ≥ 0} × {j : ⟨g, v_j⟩ ≥ 0}

[Figure: matrix A with the sampled submatrix B highlighted.]

◮ By Sheppard’s formula we know

E[#zeroes in B] / E[#ones in B] ≈ (1/4) / (1/4 + 1/√r) = 1 − Θ(1/√r)

E[fraction of entries in B] ≈ 1/4
Finding an almost monochr. submatrix

◮ Suppose A is a 0/1 matrix with #ones(A) ≥ #zeroes(A)
◮ Now sample T := Θ(√r · log(r)) Gaussians g_1, …, g_T and set

B = {i : ⟨g_t, u_i⟩ ≥ 0 ∀t ∈ [T]} × {j : ⟨g_t, v_j⟩ ≥ 0 ∀t ∈ [T]}

[Figure: matrix A with the submatrix B highlighted.]

◮ We know

E[#zeroes in B] / E[#ones in B] ≈ ((1/4) / (1/4 + 1/√r))^T = 1/(8r)

E[fraction of entries in B] ≈ (1/4)^T = 2^{−Θ(√r log r)}
The end

◮ The following is equivalent to the log-rank conjecture:

Conjecture
Any rank-r 0/1 matrix A has a submatrix B with
◮ |B| ≥ 2^{−(log r)^{O(1)}} · |A|
◮ B monochromatic (except for a 1/(8r)-fraction of entries).