Expanders via Local Edge Flips
Big Data & Sublinear Algorithms Workshop, DIMACS
Zeyuan Allen-Zhu (Princeton University), Aditya Bhaskara (Google), Lorenzo Orecchia (Boston University), Silvio Lattanzi (Google), Vahab Mirrokni (Google)
In the Local model it is possible to build an expander locally in O(log² n) rounds:
construct Skip+ locally; Skip+ has constant edge expansion and degree log n.
Limitations:
The switch protocol [McKay, Congressus Numerantium 1981]
A simple protocol: pick two edges at random and exchange their endpoints.
Creation of parallel edges/self-loops is allowed.
Limitations: it is not local, and it may disconnect the graph.
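A single switch step fits in a few lines; this sketch (with an illustrative `switch_step` helper on a plain edge list, not code from the talk) also shows that degrees are invariant under switches:

```python
import random

def switch_step(edges):
    """One switch step: pick two edges uniformly at random and exchange
    one endpoint of each.  Degrees are preserved, but parallel edges and
    self-loops may appear, and the graph may even become disconnected."""
    i, j = random.sample(range(len(edges)), 2)
    (u, v), (x, y) = edges[i], edges[j]
    # {u,v}, {x,y}  ->  {u,y}, {x,v}
    edges[i], edges[j] = (u, y), (x, v)

def degrees(es):
    """Degree of each vertex (a self-loop counts twice)."""
    d = {}
    for a, b in es:
        d[a] = d.get(a, 0) + 1
        d[b] = d.get(b, 0) + 1
    return d

random.seed(0)
edges = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2), (1, 3)]  # K4
before = degrees(edges)
for _ in range(1000):
    switch_step(edges)
assert degrees(edges) == before  # the degree sequence never changes
```

Since the two edges are chosen with no regard for locality, both endpoints of both edges must coordinate, which is exactly why the protocol is not local.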
The flip protocol [Mahlmann and Schindelhauer, SPAA 2005]
Pick a random length-3 path and swap its endpoints.
Creation of parallel edges/self-loops is allowed.
Experimentally it seems to be very fast.
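That experimental behavior is easy to reproduce with a small simulation (a pure-Python sketch; the starting graph, step count, and the `flip_step`/`diameter` helpers are illustrative choices, not from the talk). Swapping the endpoints of a path u-v-w-x keeps the middle edge {v,w}, so connectivity is preserved:

```python
import random
from collections import deque

def flip_step(adj):
    """One flip step: pick a random length-3 path u-v-w-x and replace
    edges {u,v}, {w,x} with {u,w}, {v,x} (the endpoints are swapped).
    The middle edge {v,w} survives, so the graph stays connected;
    degrees are preserved; parallel edges are allowed."""
    v = random.randrange(len(adj))
    w = random.choice(adj[v])
    u = random.choice(adj[v])
    x = random.choice(adj[w])
    if len({u, v, w, x}) < 4:   # degenerate path: skip this round
        return
    adj[u].remove(v); adj[v].remove(u)
    adj[w].remove(x); adj[x].remove(w)
    adj[u].append(w); adj[w].append(u)
    adj[v].append(x); adj[x].append(v)

def diameter(adj):
    """Exact diameter via BFS from every vertex."""
    worst = 0
    for s in range(len(adj)):
        dist = {s: 0}
        q = deque([s])
        while q:
            a = q.popleft()
            for b in adj[a]:
                if b not in dist:
                    dist[b] = dist[a] + 1
                    q.append(b)
        worst = max(worst, max(dist.values()))
    return worst

random.seed(1)
n = 120
# 4-regular circulant graph: i ~ i±1, i±2 (mod n); diameter about n/4.
adj = [[(i - 2) % n, (i - 1) % n, (i + 1) % n, (i + 2) % n] for i in range(n)]
d_before = diameter(adj)
for _ in range(20000):
    flip_step(adj)
d_after = diameter(adj)
assert d_after < d_before   # the diameter shrinks sharply under flips
```

On this instance the diameter drops from linear in n to a small constant, consistent with the graph rapidly becoming expander-like.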
[Cooper, Dyer and Greenhill, SODA 2005] For d-regular graphs the switch protocol converges to the configuration model in polynomially many steps.
[Greenhill, SODA 2015] For non-regular graphs with maximum degree in O(√m), the switch protocol converges to the configuration model in polynomially many steps.
[Mahlmann and Schindelhauer, SPAA 2005] For d-regular graphs the flip protocol converges to the configuration model.
[Feder, Guetz, Mihail and Saberi, FOCS 2006] For d-regular graphs the flip protocol converges to the configuration model in polynomially many steps.
[Cooper and Dyer, PODC 2009] For d-regular graphs the flip protocol converges to the configuration model in polynomially many steps.
[Mahlmann and Schindelhauer, SPAA 2005]
Experimentally the switch and flip protocols transform any graph into an expander very quickly. Conjectures:
the switch protocol converges on d-regular graphs in O(nd log n) steps;
the flip protocol converges on d-regular graphs in O(nd) steps.
Main result. Starting from any d-regular graph with d ∈ Ω(log n):
the switch protocol transforms the graph into an algebraic expander in O(nd) steps;
the flip protocol transforms the graph into an algebraic expander in O(n²d²·√log n) steps.
Difficulties in the analysis: dependencies between steps, and small cuts may first become smaller and only later increase.
The flip protocol, step by step:
Pick a random edge.
One of the endpoints picks a neighbor at random (if it is a common neighbor, abort).
The other endpoint picks a random neighbor (if it is a common neighbor, it picks a new one).
Perform the swap.

Let Δ(t) = L(G(t+1)) − L(G(t)). Then
E[Δ(t) | G(t)] = (4/(d²n)) · ((d+1)·L(t) − (L(t))²).
In expectation the drift has a nice term that pushes towards better expansion.
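The four steps above can be sketched as follows (an illustrative `local_flip` helper on adjacency lists, not code from the talk; here the common-neighbor checks double as a guarantee that the graph stays simple):

```python
import random

def local_flip(adj):
    """One round of the local flip protocol sketched above.
    adj maps each vertex to a list of its neighbors.
    Returns True if a swap was performed, False if the round aborted."""
    # Pick a random edge {v, w}.
    v = random.randrange(len(adj))
    w = random.choice(adj[v])
    # One endpoint (v) picks a neighbor u at random; if u is a common
    # neighbor of v and w (or the path degenerates), abort.
    u = random.choice(adj[v])
    if u == w or u in adj[w]:
        return False
    # The other endpoint (w) picks a random neighbor x; if x is a common
    # neighbor, it picks a new one (here: sample among the valid choices).
    choices = [y for y in adj[w] if y != v and y not in adj[v]]
    if not choices:
        return False
    x = random.choice(choices)
    # Perform the swap: {u,v}, {w,x}  ->  {u,w}, {v,x}.
    adj[v].remove(u); adj[u].remove(v)
    adj[w].remove(x); adj[x].remove(w)
    adj[u].append(w); adj[w].append(u)
    adj[v].append(x); adj[x].append(v)
    return True

# The swap preserves degrees; quick check on a 4-regular circulant graph.
random.seed(2)
n = 16
adj = [[(i - 2) % n, (i - 1) % n, (i + 1) % n, (i + 2) % n] for i in range(n)]
for _ in range(500):
    local_flip(adj)
assert all(len(nbrs) == 4 for nbrs in adj)                # still 4-regular
assert all(i not in nbrs for i, nbrs in enumerate(adj))   # no self-loops
```

Every choice involves only an edge and the neighborhoods of its two endpoints, which is what makes the protocol local.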
Unfortunately we cannot argue directly about the expectation of the matrix after t steps. We use a classic potential from matrix concentration:
Φ(t) = tr̂(e^{−(20 log n/d)·L(t)}),   where tr̂(e^A) = e^{λ1} + e^{λ2} + ⋯
Note that in order to have Φ(t) very small, all the eigenvalues need to be large.
We want to show that the potential decreases.
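As a sanity check on the potential (a small computation, not from the slides): if every eigenvalue satisfies $\lambda_i \ge d/4$, then

```latex
\Phi^{(t)} \;=\; \sum_i e^{-\frac{20\log n}{d}\,\lambda_i}
\;\le\; n\, e^{-\frac{20\log n}{d}\cdot\frac{d}{4}}
\;=\; n\, e^{-5\log n}
\;=\; n^{-4},
```

so a polynomially small potential certifies that all eigenvalues are large, i.e. algebraic expansion.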
Φ(t+1) = tr̂(e^{−(20 log n/d)·(L(t) + Δ(t))})
  ≤ tr̂(e^{−(20 log n/d)·L(t)} · e^{−(20 log n/d)·Δ(t)})    (by the Golden–Thompson inequality)
  ≤ tr̂(e^{−(20 log n/d)·L(t)} · (I − (20 log n/d)·Δ(t) + ((20 log n/d)·Δ(t))²))    (by e^{−A} ⪯ I − A + A²)
Taking expectation (and bounding the quadratic term):
E[Φ(t+1) | G(t)] ≤ Φ(t) − (4 log n/(d³n)) · tr̂(e^{−(20 log n/d)·L(t)} · L(t)·((d/2)·I − L(t)))
Using common diagonalization, the trace on the right equals
Σ_{1≤i≤n} e^{−(20 log n/d)·λi} · λi·(d/2 − λi).
Two interesting cases.
Case 1: ∀i, λi ≥ d/4. Then
Σ_{1≤i≤n} e^{−(20 log n/d)·λi} · λi·(d/2 − λi) ∈ O(n^{−3}),
so the drift term is negligible, but Φ(t) is already tiny.
Case 2: ∃i, λi < d/4. We look at the relative drift:
(Σ_{1≤i≤n} e^{−(20 log n/d)·λi} · λi·(d/2 − λi)) / Φ(t)
  = (Σ_{1≤i≤n} e^{−(20 log n/d)·λi} · λi·(d/2 − λi)) / (Σ_{1≤i≤n} e^{−(20 log n/d)·λi})
  ≈ (d/2) · (Σ_{1≤i≤k} e^{−(20 log n/d)·λi} · λi) / (Σ_{1≤i≤k} e^{−(20 log n/d)·λi})
  ∈ Ω(d/(n·√log n)),
where both sums are dominated by the k smallest eigenvalues, and the weighted average of those λi is in Ω(1/(n·√log n)).
Thus:
E[Φ(t+1) | G(t)] ≤ (1 − Ω(√log n / (n²d²))) · Φ(t) + O(n^{−3}).
So in expectation Φ(t) is in O(n^{−3}) after O(n²d²·√log n) steps; hence, using Markov's inequality, we get the result.

Note that the expected additive improvement in a round can be O(1/(n²d²)).
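To see where the step count comes from (a sketch): write δ = Ω(√log n/(n²d²)) for the per-step contraction factor and note Φ(0) ≤ n. Ignoring the small additive term,

```latex
(1-\delta)^{t}\,\Phi^{(0)}
\;\le\; e^{-\delta t}\, n
\;\le\; n^{-3}
\qquad\text{once}\qquad
t \;=\; \Omega\!\left(\frac{\log n}{\delta}\right)
\;=\; O\!\left(\frac{n^{2}d^{2}}{\sqrt{\log n}}\cdot \log n\right)
\;=\; O\!\left(n^{2}d^{2}\sqrt{\log n}\right).
```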