Continue Markov chain mixing analysis — PowerPoint PPT Presentation



slide-1
SLIDE 1

Today:

Continue Markov chain mixing analysis. Prove the “hard side” of Cheeger's inequality.

slide-3
SLIDE 3

Analyzing random walks on a graph.

Start at a vertex, go to a random neighbor. For a d-regular graph the walk is eventually uniform — if the graph is not bipartite. (On a bipartite graph the walk alternates between the two sides on odd/even steps!)

How to analyze? Random walk matrix M: the normalized adjacency matrix. M is symmetric, with ∑j M[i,j] = 1, and M[i,j] is the probability of going to j from i.

Probability distribution at time t: vt, with vt+1 = M vt. Each node's new value is the average over its neighbors.

Evolution? A random walk starting at vertex 1 has distribution e1 = [1, 0, ..., 0]. Expanding e1 in the eigenbasis of M:

M^t e1 = v1 + ∑i>1 λi^t αi vi,   where v1 = [1/N, ..., 1/N].

Since |λi| < 1 for i > 1 (connected and non-bipartite), the sum dies out and the walk tends to v1 → the uniform distribution.

Doh! What if the graph is bipartite? Then −1 is an eigenvalue, with eigenvector (+1, −1) on the two sides, and the λi^t terms do not die out. (Side question: why are the two sides the same size? We assumed a regular graph.)
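The convergence claim can be checked numerically. A minimal pure-Python sketch, using the 5-cycle as an assumed toy example of a connected, non-bipartite 2-regular graph: iterate vt+1 = M vt starting from e1 and watch the distribution approach uniform.

```python
def step(adj, v):
    # One step of v_{t+1} = M v_t for the normalized adjacency matrix
    # of a d-regular graph: mass at i spreads evenly to i's neighbors.
    d = len(adj[0])
    out = [0.0] * len(v)
    for i, nbrs in enumerate(adj):
        for j in nbrs:
            out[j] += v[i] / d
    return out

# 5-cycle: 2-regular, connected, and NOT bipartite (odd cycle).
adj = [[(i - 1) % 5, (i + 1) % 5] for i in range(5)]
v = [1.0, 0.0, 0.0, 0.0, 0.0]      # start at one vertex: e1
for _ in range(200):
    v = step(adj, v)
print([round(x, 3) for x in v])    # -> [0.2, 0.2, 0.2, 0.2, 0.2]
```

Swapping in the 4-cycle (bipartite) instead makes the distribution oscillate between the two sides forever, which is exactly the failure the next slide fixes.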

slide-28
SLIDE 28

Fix-it-up chappie!

“Lazy” random walk: with probability 1/2, stay at the current vertex.

Evolution matrix: (I + M)/2. Eigenvalues: (1 + λi)/2, since

½(I + M)vi = ½(vi + λi vi) = ((1 + λi)/2) vi.

So all eigenvalues lie in the interval [0, 1]. Spectral gap of the lazy walk: (1 − λ2)/2 = µ/2.

Uniform distribution: π = [1/N, ..., 1/N].

Distance to uniform: d1(vt, π) = ∑i |(vt)i − πi|.

“Rapidly mixing”: d1(vt, π) ≤ ε in poly(log N, log 1/ε) time.

When is a chain rapidly mixing? Another measure: d2(vt, π) = ∑i ((vt)i − πi)². Note (Cauchy–Schwarz): d1(vt, π) ≤ √(N · d2(vt, π)).

Let n be the “size” of a vertex (the bits needed to describe it). If µ ≥ 1/p(n) for a polynomial p(n), take t = O(p(n) log N). Then

d2(vt, π) = ‖((I + M)/2)^t e1 − π‖² ≤ ((1 + λ2)/2)^{2t} ≤ (1 − 1/(2p(n)))^{2t} ≤ 1/poly(N).

Rapidly mixing with a big (≥ 1/p(n)) spectral gap.
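The laziness fix is easy to see numerically. A sketch on the 4-cycle (an assumed toy example, chosen because it is bipartite): the plain walk oscillates forever, but iterating with the evolution matrix (I + M)/2 converges to uniform.

```python
def lazy_step(adj, v):
    # One step with evolution matrix (I + M)/2: stay put with
    # probability 1/2, otherwise move to a uniformly random neighbor.
    d = len(adj[0])
    out = [0.5 * x for x in v]
    for i, nbrs in enumerate(adj):
        for j in nbrs:
            out[j] += 0.5 * v[i] / d
    return out

adj = [[(i - 1) % 4, (i + 1) % 4] for i in range(4)]  # 4-cycle: bipartite!
v = [1.0, 0.0, 0.0, 0.0]
for _ in range(100):
    v = lazy_step(adj, v)
print([round(x, 3) for x in v])   # -> [0.25, 0.25, 0.25, 0.25]
```

The plain walk on the 4-cycle has eigenvalue −1; the lazy walk shifts it to 0, so every non-top eigenvalue has modulus at most 1/2 and the walk mixes fast.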

slide-48
SLIDE 48

Rapid mixing, volume, and surface area.

Recall the volume of a convex body. Build the grid graph on the grid points inside the convex body P.

Recall Cheeger: µ/2 ≤ h(G) ≤ √(2µ).

Lower bound on expansion → lower bound on spectral gap µ → upper bound on mixing time.

h(G) ≈ Surface Area / Volume. Isoperimetric inequality:

Voln−1(S, S̄) ≥ min(Vol(S), Vol(S̄)) / diam(P).

Edges ∝ surface area. Assume diam(P) ≤ p′(n):
→ h(G) ≥ 1/p′(n)
→ µ ≥ 1/(2p′(n)²)  (by Cheeger, since h(G) ≤ √(2µ))
→ O(p′(n)² log N) convergence for the Markov chain on the BIG GRAPH
→ rapidly mixing chain.

slide-67
SLIDE 67

Khachiyan's algorithm for counting partial orders.

Given a partial order on x1, ..., xn, sample from the uniform distribution over consistent total orders: start at some ordering, swap a random pair, and move there if the result is consistent with the partial order.

Is this a rapidly mixing chain? Map orderings into the n-dimensional unit cube: the constraint xi < xj corresponds to a halfspace of the cube (one side of the hyperplane “dimension i = dimension j”). A total order is the intersection of n − 1 such halfspaces, a region of volume 1/n!, since the regions for the n! total orders are disjoint and together cover the cube.

[Figure: the unit cube with corner (0,0), cut by the hyperplanes x1 = x2, x1 = x3, x2 = x3 into regions such as x1 > x2, x1 > x3, x2 > x3 — one region per total order.]
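The chain itself is simple to write down. A sketch (assumed details: adjacent-pair swaps, a toy partial order, and a fixed seed — the slide just says “swap random pair”):

```python
import random

def consistent(order, relations):
    # Does this total order respect every (a, b), read as "xa < xb"?
    pos = {x: k for k, x in enumerate(order)}
    return all(pos[a] < pos[b] for a, b in relations)

def chain_step(order, relations, rng):
    # Propose swapping a random adjacent pair; stay put if the swap
    # would violate the partial order.
    i = rng.randrange(len(order) - 1)
    cand = order[:i] + [order[i + 1], order[i]] + order[i + 2:]
    return cand if consistent(cand, relations) else order

rng = random.Random(0)
relations = [(0, 1), (0, 2)]   # hypothetical partial order: x0 < x1, x0 < x2
order = [0, 1, 2, 3]           # a valid starting total order
for _ in range(1000):
    order = chain_step(order, relations, rng)
print(order)   # some total order consistent with the partial order
```

Every state the chain visits is a linear extension of the partial order; the slides' expansion argument is what shows the chain reaches (near-)uniformity over them quickly.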

slide-100
SLIDE 100

[Figure: region of the unit cube near the origin (0,0), partitioned by hyperplanes "dimension i = dimension j"; regions labeled x1 > x2, x1 > x3, x2 > x3.]

Each order takes 1/n! volume, so the number of orders ≡ volume of the intersection of the partial order relations (scaled by n!).

Diameter: O(√n).

Isoperimetry: Vol_{n−1}(S, V−S) = |E(S, V−S)|/(n−1)! ≥ |S|/(n! √n).

Edge Expansion: the degree d is O(n²), so h(S) = |E(S, V−S)|/(d|S|) ≥ 1/n^{7/2}.

Mixes in time O(n^7 log N) = O(n^8 log n).

Do the polynomial dance!!!
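The final step combines the easy side of Cheeger with the standard spectral mixing bound; as a sketch in the notation above (N = n! states, so log N = O(n log n)):

```latex
\text{(easy side of Cheeger)}\quad
1-\lambda_2 \;\ge\; \frac{h(G)^2}{2} \;\ge\; \frac{1}{2\,n^{7}},
\qquad
t_{\mathrm{mix}} \;=\; O\!\left(\frac{\log N}{1-\lambda_2}\right)
 \;=\; O\!\left(n^{7}\log N\right)
 \;=\; O\!\left(n^{8}\log n\right).
```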


slide-106
SLIDE 106

Summary.

Eigenvectors for hypercubes: tight example for the LHI of Cheeger.
Eigenvectors for the cycle: tight example for the RHI of Cheeger.
Random walks and sampling: eigenvectors, isoperimetry of volume, mixing.
Partial order application.

slide-114
SLIDE 114

Cheeger Hard Part.

Now let’s get to the hard part of Cheeger:

  h(G) ≤ √(2(1−λ2)).

Idea: We have 1−λ2 as a continuous relaxation of φ(G). Take the 2nd eigenvector

  x = argmin_{x ∈ R^V ∖ Span{1}}  [∑i,j Mij (xi − xj)²] / [(1/n) ∑i,j (xi − xj)²].

Consider x as an embedding of the vertices into the real line. Round x to get x̄ ∈ {0,1}^V.

Rounding: Take a threshold t; xi ≥ t → x̄i = 1, xi < t → x̄i = 0.

What will be a good t? We don’t know. Try all possible thresholds (n−1 possibilities), and hope there is a t leading to a good cut!

slide-120
SLIDE 120

Sweep Cut Algorithm

Input: G = (V,E), x ∈ R^V, x ⊥ 1.
Sort the vertices in non-decreasing order of their values in x; WLOG V = {1,...,n} with x1 ≤ x2 ≤ ... ≤ xn.
Let Si = {1,...,i}, for i = 1,...,n−1.
Return S = argmin_{Si} h(Si).

Main Lemma: Let G = (V,E) be d-regular, x ∈ R^V with x ⊥ 1, and set

  µ = [∑i,j Mij (xi − xj)²] / [(1/n) ∑i,j (xi − xj)²].

If S is the output of the sweep cut algorithm, then h(S) ≤ √(2µ).

Note: Applying the Main Lemma with the 2nd eigenvector v2, we have µ = 1−λ2, and h(G) ≤ h(S) ≤ √(2(1−λ2)). Done!
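A minimal sketch of the sweep cut in Python, using a numpy eigendecomposition; the 6-cycle test graph is an illustrative choice:

```python
import numpy as np

def sweep_cut(A, x):
    """Sweep cut for a d-regular graph with adjacency matrix A:
    sort vertices by x, return the prefix set S_i minimizing
    h(S_i) = |E(S_i, V - S_i)| / (d * min(|S_i|, n - |S_i|))."""
    n = len(x)
    d = int(A[0].sum())
    order = np.argsort(x)
    best_S, best_h = None, float("inf")
    for i in range(1, n):
        mask = np.zeros(n, dtype=bool)
        mask[order[:i]] = True
        cut = A[mask][:, ~mask].sum()      # edges crossing (S_i, V - S_i)
        h = cut / (d * min(i, n - i))
        if h < best_h:
            best_S, best_h = set(order[:i].tolist()), h
    return best_S, best_h

# 6-cycle: d = 2, and for M = A/d the 2nd largest eigenvalue is cos(2*pi/6) = 1/2.
n = 6
A = np.zeros((n, n))
for v in range(n):
    A[v, (v + 1) % n] = A[(v + 1) % n, v] = 1.0
w, V = np.linalg.eigh(A / 2)       # eigenvalues in ascending order
lam2, x = w[-2], V[:, -2]          # 2nd largest eigenpair; x is orthogonal to 1
S, h = sweep_cut(A, x)
print(h, np.sqrt(2 * (1 - lam2)))  # the Main Lemma says h <= sqrt(2(1 - lambda2))
```

On the cycle the sweep set is a contiguous arc, so the cut has 2 edges and the best prefix gives h(S) = 1/3, comfortably below √(2(1−λ2)) = 1.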
slide-125
SLIDE 125

Proof of Main Lemma

WLOG V = {1,...,n} with x1 ≤ x2 ≤ ... ≤ xn.

Want to show ∃i s.t.

  h(Si) = [(1/d) |E(Si, V−Si)|] / min(|Si|, |V−Si|) ≤ √(2µ).

Probabilistic Argument: Construct a distribution D over {S1,...,Sn−1} such that

  E_{S∼D}[(1/d) |E(S, V−S)|] / E_{S∼D}[min(|S|, |V−S|)] ≤ √(2µ)

  → E_{S∼D}[(1/d) |E(S, V−S)| − √(2µ) · min(|S|, |V−S|)] ≤ 0

  → ∃S : (1/d) |E(S, V−S)| − √(2µ) · min(|S|, |V−S|) ≤ 0,

i.e. some Si achieves h(Si) ≤ √(2µ).
slide-130
SLIDE 130

The distribution D

WLOG, shift and scale x so that x_{⌊n/2⌋} = 0 and x1² + xn² = 1.

Take t from the range [x1, xn] with density function f(t) = 2|t|. Check:

  ∫_{x1}^{xn} f(t) dt = ∫_{x1}^{0} −2t dt + ∫_{0}^{xn} 2t dt = x1² + xn² = 1.

Set S = {i : xi ≤ t}. Take D as the resulting distribution over S1,...,Sn−1.
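A small sketch of sampling t with density f(t) = 2|t| by inverting its CDF (the function name is hypothetical):

```python
import math, random

def sample_threshold(x1, xn, rng=random):
    """Sample t on [x1, xn] with density f(t) = 2|t|, assuming
    x1 <= 0 <= xn and x1**2 + xn**2 == 1 (the WLOG scaling).
    Inverse CDF: F(t) = x1^2 - t^2 for t < 0, and x1^2 + t^2 for t >= 0."""
    u = rng.random()
    if u < x1 * x1:
        return -math.sqrt(x1 * x1 - u)
    return math.sqrt(u - x1 * x1)
```

As a sanity check, Pr[0 ≤ t ≤ c] should come out to c², matching ∫₀^c 2t dt.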

slide-134
SLIDE 134

Goal:

  E_{S∼D}[(1/d) |E(S, V−S)|] / E_{S∼D}[min(|S|, |V−S|)] ≤ √(2µ).

Denominator: Let Ti = indicator for “i is in the smaller set of S, V−S”. Can check E_{S∼D}[Ti] = Pr[Ti = 1] = xi². Hence

  E_{S∼D}[min(|S|, |V−S|)] = E_{S∼D}[∑i Ti] = ∑i E_{S∼D}[Ti] = ∑i xi².
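The “can check” step, sketched (ties at t = xi ignored): since x_{⌊n/2⌋} = 0, whenever t ≥ 0 the set S = {j : xj ≤ t} contains at least half the vertices, so the smaller side is V−S. For a vertex with xi > 0, then:

```latex
\Pr[T_i = 1] \;=\; \Pr[\,0 \le t < x_i\,]
  \;=\; \int_0^{x_i} 2t\,dt \;=\; x_i^2,
% and symmetrically, for x_i < 0: T_i = 1 iff x_i \le t < 0, giving
\qquad
\Pr[T_i = 1] \;=\; \int_{x_i}^{0} (-2t)\,dt \;=\; x_i^2 .
```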

slide-140
SLIDE 140

Goal:

  E_{S∼D}[(1/d) |E(S, V−S)|] / E_{S∼D}[min(|S|, |V−S|)] ≤ √(2µ).

Numerator: Let Ti,j = indicator for “edge {i,j} is cut by (S, V−S)”.

  xi, xj same sign: Pr[Ti,j = 1] = |xi² − xj²|.
  xi, xj different sign: Pr[Ti,j = 1] = xi² + xj².

A common upper bound: E[Ti,j] = Pr[Ti,j = 1] ≤ |xi − xj| (|xi| + |xj|).

  E_{S∼D}[(1/d) |E(S, V−S)|] = (1/2) ∑i,j Mij E[Ti,j] ≤ (1/2) ∑i,j Mij |xi − xj| (|xi| + |xj|).
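Why the common upper bound covers both cases:

```latex
\text{same sign:}\quad
|x_i^2 - x_j^2| \;=\; |x_i - x_j|\,|x_i + x_j|
  \;\le\; |x_i - x_j|\,(|x_i| + |x_j|);
\\[4pt]
\text{different sign:}\quad
|x_i - x_j| = |x_i| + |x_j|, \ \text{so}\
x_i^2 + x_j^2 \;\le\; (|x_i| + |x_j|)^2 \;=\; |x_i - x_j|\,(|x_i| + |x_j|).
```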

slide-143
SLIDE 143

Cauchy-Schwarz Inequality

|a·b| ≤ ‖a‖‖b‖, since a·b = ‖a‖‖b‖ cos(a,b).

Apply with a, b ∈ R^{n²}, where a_{ij} = √Mij |xi − xj| and b_{ij} = √Mij (|xi| + |xj|).

Numerator:

  E_{S∼D}[(1/d) |E(S, V−S)|] = (1/2) ∑i,j Mij E[Ti,j] ≤ (1/2) ∑i,j Mij |xi − xj| (|xi| + |xj|) = (1/2) a·b ≤ (1/2) ‖a‖‖b‖.

slide-146
SLIDE 146

Recall µ = [∑i,j Mij (xi − xj)²] / [(1/n) ∑i,j (xi − xj)²], a_{ij} = √Mij |xi − xj|, b_{ij} = √Mij (|xi| + |xj|).

  ‖a‖² = ∑i,j Mij (xi − xj)²
       = (µ/n) ∑i,j (xi − xj)²
       = (µ/n) [∑i,j (xi² + xj²) − ∑i,j 2 xi xj]
       = (µ/n) [∑i,j (xi² + xj²) − 2 (∑i xi)²]
       ≤ (µ/n) ∑i,j (xi² + xj²)
       = 2µ ∑i xi².

  ‖b‖² = ∑i,j Mij (|xi| + |xj|)² ≤ ∑i,j Mij (2xi² + 2xj²) = 4 ∑i xi²,

using (|xi| + |xj|)² ≤ 2xi² + 2xj² and the fact that every row and column of M sums to 1.

slide-149
SLIDE 149

Goal:

  E_{S∼D}[(1/d) |E(S, V−S)|] / E_{S∼D}[min(|S|, |V−S|)] ≤ √(2µ).

Numerator:

  E_{S∼D}[(1/d) |E(S, V−S)|] ≤ (1/2) ‖a‖‖b‖ ≤ (1/2) √(2µ ∑i xi²) · √(4 ∑i xi²) = √(2µ) ∑i xi².

Recall Denominator:

  E_{S∼D}[min(|S|, |V−S|)] = ∑i xi².

We get

  E_{S∼D}[(1/d) |E(S, V−S)|] / E_{S∼D}[min(|S|, |V−S|)] ≤ √(2µ).

Thus ∃Si such that h(Si) ≤ √(2µ), which gives h(G) ≤ √(2(1−λ2)).
slide-159
SLIDE 159

Summary

Second largest eigenvalue of the random walk matrix: λ2. Bounds mixing time. Connected to “sparse” cuts.

Cheeger: µ/2 ≤ h(G) ≤ √(2µ), where µ = 1−λ2.

Left hand tight: hypercube. Right hand tight: cycle.

Left side proof: produce a good Rayleigh quotient vector from a sparse cut. Right hand proof: produce a sparse cut from a good Rayleigh quotient. Connects to bounding mixing time of the Markov chain.
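Both inequalities can be checked numerically by brute force on a small example; here a sketch on the 8-cycle (chosen for illustration; all subsets up to half the vertices are enumerated):

```python
import itertools
import numpy as np

def cheeger_check(A):
    """For a d-regular graph with adjacency matrix A, compute
    mu = 1 - lambda_2 of M = A/d, and h(G) by brute force over subsets."""
    n = len(A)
    d = int(A[0].sum())
    lam2 = np.sort(np.linalg.eigvalsh(A / d))[-2]
    mu = 1 - lam2
    h = float("inf")
    for r in range(1, n // 2 + 1):
        for S in itertools.combinations(range(n), r):
            mask = np.zeros(n, dtype=bool)
            mask[list(S)] = True
            cut = A[mask][:, ~mask].sum()   # edges leaving S
            h = min(h, cut / (d * r))       # r = min(|S|, n - |S|) here
    return mu, h

# 8-cycle: lambda_2 = cos(2*pi/8), and the best cut is a half arc (2 edges).
n = 8
A = np.zeros((n, n))
for v in range(n):
    A[v, (v + 1) % n] = A[(v + 1) % n, v] = 1.0
mu, h = cheeger_check(A)
print(mu / 2, h, np.sqrt(2 * mu))  # mu/2 <= h(G) <= sqrt(2*mu)
```

For the cycle the right-hand inequality is the tight side: h(G) = 2/n while √(2µ) = Θ(1/n) only up to the square root, i.e. √(2µ) = Θ(1/n) fails and is actually Θ(1/n)·n^{1/2}-sized, which is why the cycle witnesses tightness of the RHI.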