Today.
Continue Markov chain mixing analysis. Prove the “hard side” of Cheeger.
Analyzing random walks on a graph.
Start at a vertex, go to a random neighbor. For a d-regular graph: eventually uniform, if not bipartite. (Otherwise the walk alternates sides on odd/even steps!) How to analyze?
Random Walk Matrix M: the normalized adjacency matrix A/d. Symmetric, with ∑_j M[i,j] = 1. M[i,j] = probability of going to j from i.
Probability distribution at time t: v_t, with v_{t+1} = M v_t. Each node's new probability is the average over its neighbors.
Evolution? Random walk starts at vertex 1, distribution e_1 = [1, 0, ..., 0]. Writing e_1 = v_1 + ∑_{i>1} α_i v_i in the eigenbasis of M:
M^t e_1 = v_1 + ∑_{i>1} λ_i^t α_i v_i, where v_1 = [1/N, ..., 1/N] → uniform distribution, since |λ_i| < 1 for i > 1.
Doh! What if bipartite? Then −1 is an eigenvalue, with eigenvector (+1, −1) on the two sides. Side question: why are the two sides the same size? We assumed a regular graph.
Fix-it-up chappie!
“Lazy” random walk: with probability 1/2, stay at the current vertex.
Evolution matrix: (I + M)/2.
Eigenvalues: (1 + λ_i)/2, since (1/2)(I + M)v_i = (1/2)(v_i + λ_i v_i) = ((1 + λ_i)/2) v_i.
Eigenvalues in the interval [0,1]. Spectral gap: (1 − λ_2)/2 = µ/2.
Uniform distribution: π = [1/N, ..., 1/N].
Distance to uniform: d_1(v_t, π) = ∑_i |(v_t)_i − π_i|.
“Rapidly mixing”: d_1(v_t, π) ≤ ε in poly(log N, log(1/ε)) time.
When is a chain rapidly mixing? Another measure: d_2(v_t, π) = ∑_i ((v_t)_i − π_i)². Note: d_1(v_t, π) ≤ √(N · d_2(v_t, π)).
n – the “size” of a vertex (N may be exponential in n). If µ ≥ 1/p(n) for a polynomial p(n), take t = O(p(n) log N):
d_2(v_t, π) = |A^t e_1 − π|² (A = (I + M)/2) ≤ ((1 + λ_2)/2)^{2t} ≤ (1 − 1/(2p(n)))^{2t} ≤ 1/poly(N).
Rapidly mixing with a big (≥ 1/p(n)) spectral gap.
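A hedged numerical companion (our own sketch): build the lazy matrix for the 5-cycle used above, read off its spectral gap, and watch d_1(v_t, π) decay.

```python
# Sketch: spectral gap of the lazy walk (I + M)/2 and the d_1 distance to uniform.
import numpy as np

N, d = 5, 2
A = np.zeros((N, N))
for i in range(N):
    A[i, (i + 1) % N] = A[i, (i - 1) % N] = 1.0
lazy = (np.eye(N) + A / d) / 2           # eigenvalues (1 + lambda_i)/2, all in [0, 1]

lam2 = np.sort(np.linalg.eigvalsh(lazy))[-2]
print("spectral gap:", 1 - lam2)         # = (1 - lambda_2)/2 = mu/2

pi = np.full(N, 1.0 / N)
v = np.zeros(N); v[0] = 1.0
for t in range(1, 31):
    v = lazy @ v
    if t % 10 == 0:
        print(t, np.abs(v - pi).sum())   # d_1(v_t, pi) decays geometrically
```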
Rapid mixing, volume, and surface area.
Recall volume of a convex body: grid graph on the grid points inside the convex body.
Recall Cheeger: µ/2 ≤ h(G) ≤ √(2µ).
Lower bound on expansion → lower bound on spectral gap µ → upper bound on mixing time.
h(G) ≈ Surface Area / Volume.
Isoperimetric inequality: Vol_{n−1}(S, S̄) ≥ min(Vol(S), Vol(S̄)) / diam(P).
Edges ∝ surface area. Assume diam(P) ≤ p′(n) → h(G) ≥ 1/p′(n) → µ ≥ 1/(2p′(n)²) (since h(G) ≤ √(2µ) gives µ ≥ h(G)²/2) → O(p′(n)² log N) convergence for the Markov chain on the BIG GRAPH → rapidly mixing chain.
Khachiyan’s algorithm for counting partial orders.
Given a partial order on x_1, ..., x_n. Sample from the uniform distribution over total orders.
Start at an ordering. Swap a random pair, and move there if the result is consistent with the partial order. Rapidly mixing chain?
Map into the n-dimensional unit cube. x_i < x_j corresponds to a halfspace of the cube (one side of the hyperplane “dimension i = dimension j”). A total order is the intersection of such halfspaces, each total order of volume 1/n!, since the total orders are disjoint and together cover the cube.
[Figure: the unit cube with origin (0,0), cut by the hyperplanes “dimension i = dimension j” into regions labeled x_1 > x_2, x_1 > x_3, x_2 > x_3.]
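A hedged sketch of the sampler. The slide just says “swap random pair”, so treat the details below as our assumptions: we use adjacent-transposition moves (a common variant) and make the chain lazy to avoid parity issues.

```python
# Sketch: lazy random walk on the linear extensions of a partial order.
import random

def step(order, precedes):
    """One move; precedes is a set of pairs (a, b) meaning a must come before b."""
    if random.random() < 0.5:            # lazy: stay put with probability 1/2
        return order
    i = random.randrange(len(order) - 1)
    a, b = order[i], order[i + 1]
    if (a, b) not in precedes:           # swap only if it keeps the partial order
        order[i], order[i + 1] = b, a
    return order

precedes = {(0, 1), (0, 2)}              # toy partial order: x0 < x1 and x0 < x2
order = [0, 1, 2]                        # any linear extension works as a start
for _ in range(10_000):                  # run well past the mixing time
    order = step(order, precedes)
print(order)                             # ≈ uniform over [0,1,2] and [0,2,1]
```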
Each order takes 1/n! of the volume.
Number of orders ≡ volume of the intersection of the partial order relations.
Diameter: O(√n).
Isoperimetry: Vol_{n−1}(S, S̄) = |E(S, S̄)|/(n−1)! ≥ |S|/(n! √n).
Edge Expansion: the degree d is O(n²), so h(S) = |E(S, S̄)|/(d|S|) ≥ 1/n^{7/2}.
Mixes in time O(n^7 log N) = O(n^8 log n), since µ ≥ h²/2 ≥ 1/(2n^7) and log N = log n! = O(n log n).
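For intuition, a tiny brute-force check (ours): for small n one can enumerate the linear extensions directly and confirm what the chain should converge to.

```python
# Brute-force count of linear extensions for a small partial order.
from itertools import permutations

precedes = {(0, 1), (0, 2)}              # same toy order as in the chain sketch
exts = [p for p in permutations(range(3))
        if all(p.index(a) < p.index(b) for a, b in precedes)]
print(len(exts), exts)                   # 2: (0, 1, 2) and (0, 2, 1)
```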
Do the polynomial dance!!!
Summary.
Eigenvectors for hypercubes: tight example for the left-hand inequality of Cheeger.
Eigenvectors for the cycle: tight example for the right-hand inequality of Cheeger.
Random Walks and Sampling.
Eigenvectors, Isoperimetry of Volume, Mixing.
Partial Order Application.
Cheeger Hard Part.
Now let’s get to the hard part of Cheeger: h(G) ≤ √(2(1−λ_2)).
Idea: we have 1−λ_2 as a continuous relaxation of φ(G).
Take the 2nd eigenvector x = argmin_{x ∈ R^V − Span{1}} [∑_{i,j} M_{ij}(x_i − x_j)²] / [(1/n) ∑_{i,j}(x_i − x_j)²].
Consider x as an embedding of the vertices into the real line.
Round x to get a vector x̄ ∈ {0,1}^V. Rounding: take a threshold t; x_i ≥ t → x̄_i = 1, x_i < t → x̄_i = 0.
What will be a good t? We don’t know. Try all possible thresholds (n − 1 possibilities), and hope there is a t leading to a good cut!
Sweep Cut Algorithm
Input: G = (V, E), x ∈ R^V, x ⊥ 1.
Sort the vertices in non-decreasing order of their values in x; WLOG V = {1, ..., n} with x_1 ≤ x_2 ≤ ... ≤ x_n.
Let S_i = {1, ..., i} for i = 1, ..., n − 1.
Return S = argmin_{S_i} h(S_i).
Main Lemma: Let G = (V, E) be d-regular, x ∈ R^V, x ⊥ 1, and µ = [∑_{i,j} M_{ij}(x_i − x_j)²] / [(1/n) ∑_{i,j}(x_i − x_j)²]. If S is the output of the sweep cut algorithm, then h(S) ≤ √(2µ).
Note: Applying the Main Lemma with the 2nd eigenvector v_2, we have µ = 1 − λ_2, and h(G) ≤ h(S) ≤ √(2(1−λ_2)). Done!
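A hedged implementation sketch of the sweep cut (function and variable names are ours): scan all n − 1 prefix cuts and return the best.

```python
# Sketch: sweep cut over the entries of x on a d-regular graph.
import numpy as np

def sweep_cut(A, x, d):
    """A: 0/1 adjacency matrix of a d-regular graph; x: vector with x ⊥ 1."""
    n = len(x)
    order = np.argsort(x)                # relabel so x_1 <= x_2 <= ... <= x_n
    best_S, best_h = None, np.inf
    for i in range(1, n):                # S_i = first i vertices in sweep order
        mask = np.zeros(n, dtype=bool)
        mask[order[:i]] = True
        cut = A[mask][:, ~mask].sum()    # |E(S_i, V - S_i)|
        h = cut / (d * min(i, n - i))    # edge expansion h(S_i)
        if h < best_h:
            best_S, best_h = order[:i], h
    return best_S, best_h
```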
Proof of Main Lemma
WLOG V = {1, ..., n} with x_1 ≤ x_2 ≤ ... ≤ x_n.
Want to show ∃i s.t. h(S_i) = [(1/d)|E(S_i, V − S_i)|] / min(|S_i|, |V − S_i|) ≤ √(2µ).
Probabilistic Argument: construct a distribution D over {S_1, ..., S_{n−1}} such that
E_{S∼D}[(1/d)|E(S, V − S)|] / E_{S∼D}[min(|S|, |V − S|)] ≤ √(2µ)
→ E_{S∼D}[(1/d)|E(S, V − S)| − √(2µ) · min(|S|, |V − S|)] ≤ 0
→ ∃S: (1/d)|E(S, V − S)| − √(2µ) · min(|S|, |V − S|) ≤ 0.
The distribution D
WLOG, shift and scale so that x_{⌊n/2⌋} = 0 (the median entry) and x_1² + x_n² = 1.
Take t from the range [x_1, x_n] with density function f(t) = 2|t|.
Check: ∫_{x_1}^{x_n} f(t) dt = ∫_{x_1}^{0} −2t dt + ∫_{0}^{x_n} 2t dt = x_1² + x_n² = 1.
S = {i : x_i ≤ t}.
Take D as the distribution over S_1, ..., S_{n−1} given by this procedure.
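A hedged sketch of sampling t by inverse CDF (ours): the CDF is F(t) = x_1² − t² for t < 0 and F(t) = x_1² + t² for t ≥ 0, which is easy to invert.

```python
# Sketch: sample t on [x1, xn] with density f(t) = 2|t| via the inverse CDF.
import math, random

def sample_threshold(x1, xn):
    """Assumes x1 <= 0 <= xn and x1**2 + xn**2 == 1."""
    u = random.random()                  # uniform in [0, 1)
    if u <= x1 * x1:                     # left piece: F(t) = x1^2 - t^2, t < 0
        return -math.sqrt(x1 * x1 - u)
    return math.sqrt(u - x1 * x1)        # right piece: F(t) = x1^2 + t^2, t >= 0
```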
Goal: E_{S∼D}[(1/d)|E(S, V − S)|] / E_{S∼D}[min(|S|, |V − S|)] ≤ √(2µ).
Denominator: Let T_i = indicator for “i is in the smaller set of S, V − S”.
Can check: E_{S∼D}[T_i] = Pr[T_i = 1] = x_i². (E.g., for x_i < 0, vertex i is on the smaller side exactly when x_i ≤ t < 0, which happens with probability ∫_{x_i}^{0} 2|t| dt = x_i²; symmetrically for x_i > 0.)
E_{S∼D}[min(|S|, |V − S|)] = E_{S∼D}[∑_i T_i] = ∑_i E_{S∼D}[T_i] = ∑_i x_i².
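A hedged Monte Carlo check of Pr[T_i = 1] = x_i² (the vector below is our own toy example: sorted, median 0, x_1² + x_n² = 1; sample_threshold is repeated from the sketch above).

```python
# Monte Carlo check that Pr[T_i = 1] = x_i^2 under the distribution D.
import math, random

def sample_threshold(x1, xn):            # as in the previous sketch
    u = random.random()
    return -math.sqrt(x1 * x1 - u) if u <= x1 * x1 else math.sqrt(u - x1 * x1)

x = [-0.6, -0.2, 0.0, 0.3, 0.8]          # sorted, median 0, x_1^2 + x_n^2 = 1
n, trials = len(x), 200_000
hits = [0] * n
for _ in range(trials):
    t = sample_threshold(x[0], x[-1])
    S = [i for i in range(n) if x[i] <= t]
    small = S if len(S) <= n - len(S) else [i for i in range(n) if x[i] > t]
    for i in small:
        hits[i] += 1
print([round(h / trials, 2) for h in hits])  # ≈ [0.36, 0.04, 0.0, 0.09, 0.64] = x_i^2
```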
Goal: E_{S∼D}[(1/d)|E(S, V − S)|] / E_{S∼D}[min(|S|, |V − S|)] ≤ √(2µ).
Numerator: Let T_{i,j} = indicator for “{i, j} is cut by (S, V − S)”.
x_i, x_j same sign: Pr[T_{i,j} = 1] = |x_i² − x_j²|.
x_i, x_j different signs: Pr[T_{i,j} = 1] = x_i² + x_j².
A common upper bound: E[T_{i,j}] = Pr[T_{i,j} = 1] ≤ |x_i − x_j|(|x_i| + |x_j|). (Same sign: |x_i² − x_j²| = |x_i − x_j| · |x_i + x_j| ≤ |x_i − x_j|(|x_i| + |x_j|). Different signs: |x_i − x_j| = |x_i| + |x_j|, so the bound is (|x_i| + |x_j|)² ≥ x_i² + x_j².)
E_{S∼D}[(1/d)|E(S, V − S)|] = (1/2) ∑_{i,j} M_{ij} E[T_{i,j}] ≤ (1/2) ∑_{i,j} M_{ij} |x_i − x_j|(|x_i| + |x_j|).
Cauchy-Schwarz Inequality
|a · b| ≤ ‖a‖‖b‖, as a · b = ‖a‖‖b‖ cos(a, b).
Apply with a, b ∈ R^{n²}, where a_{ij} = √(M_{ij}) |x_i − x_j| and b_{ij} = √(M_{ij}) (|x_i| + |x_j|) (the square roots make a_{ij} b_{ij} = M_{ij}|x_i − x_j|(|x_i| + |x_j|)).
Numerator: E_{S∼D}[(1/d)|E(S, V − S)|] = (1/2) ∑_{i,j} M_{ij} E[T_{i,j}] ≤ (1/2) ∑_{i,j} M_{ij} |x_i − x_j|(|x_i| + |x_j|) = (1/2) a · b ≤ (1/2) ‖a‖‖b‖.
Recall µ = [∑_{i,j} M_{ij}(x_i − x_j)²] / [(1/n) ∑_{i,j}(x_i − x_j)²], a_{ij} = √(M_{ij})|x_i − x_j|, b_{ij} = √(M_{ij})(|x_i| + |x_j|).
‖a‖² = ∑_{i,j} M_{ij}(x_i − x_j)² = (µ/n) ∑_{i,j}(x_i − x_j)² = (µ/n) [∑_{i,j}(x_i² + x_j²) − ∑_{i,j} 2x_i x_j] = (µ/n) [∑_{i,j}(x_i² + x_j²) − 2(∑_i x_i)²] ≤ (µ/n) ∑_{i,j}(x_i² + x_j²) = 2µ ∑_i x_i². (The dropped term −2(∑_i x_i)² is in fact 0, since x ⊥ 1.)
‖b‖² = ∑_{i,j} M_{ij}(|x_i| + |x_j|)² ≤ ∑_{i,j} M_{ij}(2x_i² + 2x_j²) = 4 ∑_i x_i² (using ∑_j M_{ij} = 1).
Goal: E_{S∼D}[(1/d)|E(S, V − S)|] / E_{S∼D}[min(|S|, |V − S|)] ≤ √(2µ).
Numerator: E_{S∼D}[(1/d)|E(S, V − S)|] ≤ (1/2)‖a‖‖b‖ ≤ (1/2) √(2µ ∑_i x_i²) · √(4 ∑_i x_i²) = √(2µ) ∑_i x_i².
Recall Denominator: E_{S∼D}[min(|S|, |V − S|)] = ∑_i x_i².
We get E_{S∼D}[(1/d)|E(S, V − S)|] / E_{S∼D}[min(|S|, |V − S|)] ≤ √(2µ).
Thus ∃S_i such that h(S_i) ≤ √(2µ), which gives h(G) ≤ √(2(1−λ_2)).
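As an end-to-end sanity check (our own sketch), one can compute λ_2 for a small regular graph, sweep its 2nd eigenvector, and confirm h(S) ≤ √(2(1−λ_2)); the 6-cycle below is just an illustrative choice.

```python
# Sanity check of Cheeger's hard side on the 6-cycle (2-regular).
import numpy as np

N, d = 6, 2
A = np.zeros((N, N))
for i in range(N):
    A[i, (i + 1) % N] = A[i, (i - 1) % N] = 1.0
M = A / d

w, V = np.linalg.eigh(M)                 # ascending eigenvalues / eigenvectors
lam2, x = w[-2], V[:, -2]                # 2nd largest eigenvalue and its eigenvector

order = np.argsort(x)                    # sweep cut over prefixes of x
best_h = np.inf
for i in range(1, N):
    mask = np.zeros(N, dtype=bool)
    mask[order[:i]] = True
    cut = A[mask][:, ~mask].sum()
    best_h = min(best_h, cut / (d * min(i, N - i)))
print(best_h, "<=", np.sqrt(2 * (1 - lam2)))   # e.g. 0.333... <= 1.0
```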
Summary
Second largest eigenvalue of the matrix: λ_2.
Bounds mixing time. Connected to “sparse” cuts.
Cheeger: µ/2 ≤ h(G) ≤ √(2µ).
Left hand tight: hypercube. Right hand tight: cycle.
Left side proof: produce a good Rayleigh quotient vector from a sparse cut.
Right side proof: produce a sparse cut from a good Rayleigh quotient.