

SLIDE 5

Welcome back.

Today:
  • Review: Spectral gap, Edge expansion h(G), Sparsity φ(G), etc.
  • Write 1−λ2 as a relaxation of φ(G); Cheeger, easy part.
  • Cheeger, hard part: Sweeping cut algorithm, proof, asymptotically tight example.

SLIDE 13

Edge Expansion/Conductance.

Graph G = (V,E); assume a regular graph of degree d.

Edge expansion:
h(S) = |E(S, V−S)| / (d·min(|S|, |V−S|)),  h(G) = min_{S⊂V} h(S)

Conductance (sparsity):
φ(S) = n·|E(S, V−S)| / (d·|S|·|V−S|),  φ(G) = min_{S⊂V} φ(S)

Note n ≥ max(|S|, |V−S|) ≥ n/2 → h(G) ≤ φ(G) ≤ 2h(G)
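These definitions are easy to sanity-check by brute force on a small graph. A minimal sketch (not from the slides; the 6-cycle and the helper names are illustrative choices), which also confirms h(G) ≤ φ(G) ≤ 2h(G):

```python
from itertools import combinations

def cut_edges(edges, S):
    """Number of edges with exactly one endpoint in S."""
    return sum((i in S) != (j in S) for i, j in edges)

def h_and_phi(n, d, edges):
    """Brute-force edge expansion h(G) and conductance phi(G)
    for a d-regular graph on vertices 0..n-1."""
    V = range(n)
    h = phi = float("inf")
    for k in range(1, n):                      # all proper non-empty subsets S
        for S in map(set, combinations(V, k)):
            c = cut_edges(edges, S)
            h = min(h, c / (d * min(len(S), n - len(S))))
            phi = min(phi, n * c / (d * len(S) * (n - len(S))))
    return h, phi

# 6-cycle: 2-regular; the best cut splits it into two arcs with 2 cut edges
n, d = 6, 2
edges = [(i, (i + 1) % n) for i in range(n)]
h, phi = h_and_phi(n, d, edges)
assert h <= phi <= 2 * h                       # h(G) <= phi(G) <= 2 h(G)
```

For the 6-cycle the minimizing S is a contiguous half, giving h = 2/(2·3) = 1/3 and φ = 6·2/(2·3·3) = 2/3, so both inequalities are tight here.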

SLIDE 23

Spectra of the graph.

A: adjacency matrix, Aij = 1 ⇔ (i,j) ∈ E.
M = (1/d)·A, the normalized adjacency matrix; M is real and symmetric.
  • Orthonormal eigenvectors v1, ..., vn with eigenvalues λ1 ≥ λ2 ≥ ... ≥ λn.

Claim: Any two eigenvectors with different eigenvalues are orthogonal.

Proof: Let v, v′ be eigenvectors with eigenvalues λ ≠ λ′.
vT Mv′ = vT(λ′v′) = λ′·vT v′
vT Mv′ = (Mv)T v′ = λ·vT v′   (M symmetric)
So (λ − λ′)·vT v′ = 0, hence vT v′ = 0.

SLIDE 45

Action of M.

v assigns values to vertices. (Mv)i = (1/d)·∑_{j∼i} vj.
Action of M: taking the average of your neighbours.

(Direct) results from the action of M:
  • |λi| ≤ 1 for all i
  • v1 = 1, λ1 = 1

Claim: For a connected graph, λ2 < 1.
Proof: Let v ⊥ 1 be the second eigenvector, with maximum value x. Connected → there is a path from an x-valued node to a node of lower value, so ∃ edge e = (i,j) with vi = x and vj < x. Then
(Mv)i ≤ (1/d)·(x + x + ··· + vj) < x.
Therefore λ2 < 1.

Claim: G is connected if λ2 < 1.
Proof: By contradiction. If G is disconnected, assign +1 to the vertices of one component and −δ to the rest. Then x = Mx, i.e., x is an eigenvector with eigenvalue λ = 1. Choose δ to make ∑i xi = 0, i.e., x ⊥ 1, so λ2 = 1.
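The averaging action of M and its first consequences can be checked concretely. A minimal Python sketch (the 5-cycle and the test vector are illustrative choices, not from the slides):

```python
def apply_M(adj, v):
    """(Mv)_i = average of v over the neighbours of i (d-regular graph)."""
    return [sum(v[j] for j in adj[i]) / len(adj[i]) for i in range(len(adj))]

# 2-regular example: the 5-cycle
n = 5
adj = [[(i - 1) % n, (i + 1) % n] for i in range(n)]

ones = [1.0] * n
assert apply_M(adj, ones) == ones      # M1 = 1, so lambda_1 = 1

v = [3.0, -1.0, 0.5, 2.0, -4.5]
w = apply_M(adj, v)
# Averaging can never increase the maximum absolute value,
# which is why |lambda_i| <= 1 for every eigenvalue.
assert max(abs(x) for x in w) <= max(abs(x) for x in v)
```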

SLIDE 54

Spectral Gap and the connectivity of the graph.

Spectral gap: µ = λ1 − λ2 = 1 − λ2.
Recall: h(G) = min_{S, |S|≤|V|/2} |E(S, V−S)| / |S|

1−λ2 = 0 ⇔ λ2 = 1 ⇔ G disconnected ⇔ h(G) = 0
In general, a small spectral gap 1−λ2 suggests a "poorly connected" graph. Formally:

Cheeger's Inequality: (1−λ2)/2 ≤ h(G) ≤ √(2(1−λ2))
SLIDE 61

Spectral Gap and Conductance.

We will show 1−λ2 is a continuous relaxation of φ(G).

φ(G) = min_{S⊂V} n·|E(S, V−S)| / (d·|S|·|V−S|)

Let x be the characteristic vector of the set S:
xi = 1 if i ∈ S, xi = 0 if i ∉ S.

|E(S, V−S)| = (1/2)·∑_{i,j} Aij·|xi − xj| = (d/2)·∑_{i,j} Mij·(xi − xj)²
|S|·|V−S| = (1/2)·∑_{i,j} |xi − xj| = (1/2)·∑_{i,j} (xi − xj)²

φ(G) = min_{x ∈ {0,1}^V ∖ {0, 1}} n·∑_{i,j} Mij·(xi − xj)² / ∑_{i,j} (xi − xj)²

SLIDE 65

Recall the Rayleigh quotient: λ2 = max_{x∈R^V−{0}, x⊥1} (xT Mx)/(xT x)

1−λ2 = min_{x∈R^V−{0}, x⊥1} 2(xT x − xT Mx) / (2·xT x)

Claim: 2·xT x = (1/n)·∑_{i,j} (xi − xj)²

Proof:
∑_{i,j} (xi − xj)² = ∑_{i,j} (xi² + xj² − 2xixj) = 2n·∑_i xi² − 2(∑_i xi)² = 2n·∑_i xi² = 2n·xT x
We used x ⊥ 1 ⇒ ∑_i xi = 0.

SLIDE 68

Recall the Rayleigh quotient: λ2 = max_{x∈R^V−{0}, x⊥1} (xT Mx)/(xT x)

1−λ2 = min_{x∈R^V−{0}, x⊥1} 2(xT x − xT Mx) / (2·xT x)

Claim: 2(xT x − xT Mx) = ∑_{i,j} Mij·(xi − xj)²

Proof:
∑_{i,j} Mij·(xi − xj)² = ∑_{i,j} Mij·(xi² + xj²) − 2·∑_{i,j} Mij·xi·xj
= ∑_i ∑_{j∼i} (1/d)·(xi² + xj²) − 2·xT Mx
= 2·∑_{(i,j)∈E} (1/d)·(xi² + xj²) − 2·xT Mx
= 2·∑_i xi² − 2·xT Mx = 2·xT x − 2·xT Mx

SLIDE 75

Combining the two claims, we get

1−λ2 = min_{x∈R^V−{0}, x⊥1} ∑_{i,j} Mij·(xi − xj)² / ((1/n)·∑_{i,j} (xi − xj)²)
     = min_{x∈R^V − Span{1}} ∑_{i,j} Mij·(xi − xj)² / ((1/n)·∑_{i,j} (xi − xj)²)

Recall φ(G) = min_{x ∈ {0,1}^V ∖ {0, 1}} n·∑_{i,j} Mij·(xi − xj)² / ∑_{i,j} (xi − xj)²

We have 1−λ2 as a continuous relaxation of φ(G), thus 1−λ2 ≤ φ(G) ≤ 2h(G).
Hooray!! We get the easy part of Cheeger: (1−λ2)/2 ≤ h(G)
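One concrete check of 1−λ2 ≤ φ(G) ≤ 2h(G): for the complete graph K_n, M = A/(n−1) has eigenvalues 1 and −1/(n−1) (a standard closed form), so 1−λ2 = n/(n−1). A sketch for K4, with h and φ computed by brute force (the graph choice is illustrative, not from the slides):

```python
from itertools import combinations

# Complete graph K4 (3-regular): lambda_2 = -1/(n-1), so 1 - lambda_2 = 4/3.
n, d = 4, 3
edges = list(combinations(range(n), 2))
one_minus_lam2 = 1 - (-1 / (n - 1))

def cut(S):
    """Number of edges crossing (S, V-S)."""
    return sum((i in S) != (j in S) for i, j in edges)

subsets = [set(c) for k in range(1, n) for c in combinations(range(n), k)]
h = min(cut(S) / (d * min(len(S), n - len(S))) for S in subsets)
phi = min(n * cut(S) / (d * len(S) * (n - len(S))) for S in subsets)

# 1 - lambda_2 <= phi(G) <= 2 h(G); for K4 all three equal 4/3
assert one_minus_lam2 <= phi + 1e-9 <= 2 * h + 1e-9
```

For K4 the chain is tight: 1−λ2 = φ(G) = 2h(G) = 4/3, so the relaxation loses nothing on complete graphs.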

SLIDE 83

Cheeger Hard Part.

Now let's get to the hard part of Cheeger: h(G) ≤ √(2(1−λ2)).

Idea: We have 1−λ2 as a continuous relaxation of φ(G).
Take the 2nd eigenvector x = argmin_{x∈R^V − Span{1}} ∑_{i,j} Mij·(xi − xj)² / ((1/n)·∑_{i,j} (xi − xj)²).
Consider x as an embedding of the vertices into the real line.
Round x to get an x̄ ∈ {0,1}^V.
Rounding: take a threshold t; if xi ≥ t → x̄i = 1, if xi < t → x̄i = 0.
What will be a good t? We don't know. Try all possible thresholds (n−1 possibilities), and hope there is a t leading to a good cut!

SLIDE 89

Sweeping Cut Algorithm

Input: G = (V,E), x ∈ R^V, x ⊥ 1.
Sort the vertices in non-decreasing order of their values in x; WLOG V = {1,...,n} with x1 ≤ x2 ≤ ... ≤ xn.
Let Si = {1,...,i} for i = 1,...,n−1.
Return S = argmin_{Si} h(Si).

Main Lemma: Let G = (V,E) be d-regular, x ∈ R^V, x ⊥ 1, and δ = ∑_{i,j} Mij·(xi − xj)² / ((1/n)·∑_{i,j} (xi − xj)²). If S is the output of the sweeping cut algorithm, then h(S) ≤ √(2δ).

Note: Applying the Main Lemma to the 2nd eigenvector v2, we have δ = 1−λ2, and h(G) ≤ h(S) ≤ √(2(1−λ2)). Done!
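The algorithm above is only a few lines of code. A sketch (pure Python; the 8-cycle and the tent-shaped embedding are illustrative assumptions standing in for an actual second eigenvector):

```python
def sweep_cut(n, d, edges, x):
    """Sweeping cut: sort vertices by x, try the n-1 prefix cuts S_i,
    and return the one minimizing h(S_i)."""
    order = sorted(range(n), key=lambda i: x[i])
    best_S, best_h = None, float("inf")
    for i in range(1, n):
        S = set(order[:i])
        c = sum((u in S) != (v in S) for u, v in edges)
        hS = c / (d * min(len(S), n - len(S)))
        if hS < best_h:
            best_S, best_h = S, hS
    return best_S, best_h

# 8-cycle with a "tent" embedding: low near vertex 0, high near vertex n/2
n, d = 8, 2
edges = [(i, (i + 1) % n) for i in range(n)]
x = [min(i, n - i) for i in range(n)]
S, hS = sweep_cut(n, d, edges, x)
assert hS == 2 / (d * (n // 2))   # recovers a half/half cut with 2 cut edges
```

On this input the sweep finds S = {0, 1, 2, 7}, one contiguous arc of the cycle, with h(S) = 2/(2·4) = 0.25.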
slide-90
SLIDE 90

Proof of Main Lemma

WLOG V = {1,...,n} x1 ≤ x2 ≤ ... ≤ xn

slide-91
SLIDE 91

Proof of Main Lemma

WLOG V = {1,...,n} x1 ≤ x2 ≤ ... ≤ xn Want to show ∃i s.t. h(Si) =

1 d |E(S,V −S)|

min(|S|,|V −S|) ≤ √ 2δ

slide-92
SLIDE 92

Proof of Main Lemma

WLOG V = {1,...,n} x1 ≤ x2 ≤ ... ≤ xn Want to show ∃i s.t. h(Si) =

1 d |E(S,V −S)|

min(|S|,|V −S|) ≤ √ 2δ Probabilistic Argument: Construct a distribution D over {S1,...,Sn−1} such that ES∼D[ 1

d |E(S,V −S)|]

ES∼D[min(|S|,|V −S|)] ≤ √ 2δ

slide-93
SLIDE 93

Proof of Main Lemma

WLOG V = {1,...,n} x1 ≤ x2 ≤ ... ≤ xn Want to show ∃i s.t. h(Si) =

1 d |E(S,V −S)|

min(|S|,|V −S|) ≤ √ 2δ Probabilistic Argument: Construct a distribution D over {S1,...,Sn−1} such that ES∼D[ 1

d |E(S,V −S)|]

ES∼D[min(|S|,|V −S|)] ≤ √ 2δ → ES∼D[ 1

d |E(S,V −S)|−

√ 2δmin(|S|,|V −S|)] ≤ 0

slide-94
SLIDE 94

Proof of Main Lemma

WLOG V = {1,...,n} x1 ≤ x2 ≤ ... ≤ xn Want to show ∃i s.t. h(Si) =

1 d |E(S,V −S)|

min(|S|,|V −S|) ≤ √ 2δ Probabilistic Argument: Construct a distribution D over {S1,...,Sn−1} such that ES∼D[ 1

d |E(S,V −S)|]

ES∼D[min(|S|,|V −S|)] ≤ √ 2δ → ES∼D[ 1

d |E(S,V −S)|−

√ 2δmin(|S|,|V −S|)] ≤ 0 ∃S

1 d |E(S,V −S)|−

√ 2δmin(|S|,|V −S|) ≤ 0

SLIDE 99

The distribution D

WLOG, shift and scale so that x_{⌊n/2⌋} = 0 and x1² + xn² = 1.

Take t from the range [x1, xn] with density function f(t) = 2|t|.
Check: ∫_{x1}^{xn} f(t) dt = ∫_{x1}^{0} −2t dt + ∫_{0}^{xn} 2t dt = x1² + xn² = 1.

S = {i : xi ≤ t}.
Take D to be the distribution over S1,...,Sn−1 that results from the above procedure.
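The key property of D, Pr[Ti] = xi² (used on the next slide), can be checked empirically by inverse-transform sampling from f(t) = 2|t|. A Monte Carlo sketch under the slide's normalization (the embedding values, seed, and trial count are arbitrary choices):

```python
import random

def sample_t(x1, xn, rng):
    # Inverse-transform sampling from f(t) = 2|t| on [x1, xn], assuming
    # x1 < 0 < xn and x1^2 + xn^2 = 1 as arranged on the slide.
    # CDF: F(t) = x1^2 - t^2 for t <= 0, and x1^2 + t^2 for t >= 0.
    u = rng.random()
    return -((x1 * x1 - u) ** 0.5) if u < x1 * x1 else (u - x1 * x1) ** 0.5

x = [-0.6, -0.3, 0.0, 0.4, 0.8]          # median 0, x1^2 + xn^2 = 1
rng = random.Random(0)
trials = 200_000
hits = [0] * len(x)                       # hits[i]: "i landed in the smaller side"
for _ in range(trials):
    t = sample_t(x[0], x[-1], rng)
    S = {i for i, xi in enumerate(x) if xi <= t}
    small = S if len(S) <= len(x) - len(S) else set(range(len(x))) - S
    for i in small:
        hits[i] += 1

for i, xi in enumerate(x):
    assert abs(hits[i] / trials - xi * xi) < 0.01   # Pr[T_i] ~ x_i^2
```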

SLIDE 103

Goal: E_{S∼D}[(1/d)·|E(S, V−S)|] / E_{S∼D}[min(|S|, |V−S|)] ≤ √(2δ)

Denominator: let Ti be the indicator that i is in the smaller of S, V−S.
Can check: E_{S∼D}[Ti] = Pr[Ti] = xi².

E_{S∼D}[min(|S|, |V−S|)] = E_{S∼D}[∑_i Ti] = ∑_i E_{S∼D}[Ti] = ∑_i xi²
slide-104
SLIDE 104

Goal:

ES∼D[ 1

d |E(S,V−S)|]

ES∼D[min(|S|,|V−S|)] ≤

√ 2δ

slide-105
SLIDE 105

Goal:

ES∼D[ 1

d |E(S,V−S)|]

ES∼D[min(|S|,|V−S|)] ≤

√ 2δ Numerator:

slide-106
SLIDE 106

Goal:

ES∼D[ 1

d |E(S,V−S)|]

ES∼D[min(|S|,|V−S|)] ≤

√ 2δ Numerator: Let Ti,j = i,j is cut by (S,V −S)

slide-107
SLIDE 107

Goal:

ES∼D[ 1

d |E(S,V−S)|]

ES∼D[min(|S|,|V−S|)] ≤

√ 2δ Numerator: Let Ti,j = i,j is cut by (S,V −S)

  • xi,xj same sign:

Pr[Ti,j] = |x2

i −x2 j |

xi,xj different sign: Pr[Ti,j] = x2

i +x2 j

slide-108
SLIDE 108

Goal:

ES∼D[ 1

d |E(S,V−S)|]

ES∼D[min(|S|,|V−S|)] ≤

√ 2δ Numerator: Let Ti,j = i,j is cut by (S,V −S)

  • xi,xj same sign:

Pr[Ti,j] = |x2

i −x2 j |

xi,xj different sign: Pr[Ti,j] = x2

i +x2 j

A common upper bound: E[Ti,j] = Pr[Ti,j] ≤ |xi −xj|(|xi|+|xj|)

slide-109
SLIDE 109

Goal:

ES∼D[ 1

d |E(S,V−S)|]

ES∼D[min(|S|,|V−S|)] ≤

√ 2δ Numerator: Let Ti,j = i,j is cut by (S,V −S)

  • xi,xj same sign:

Pr[Ti,j] = |x2

i −x2 j |

xi,xj different sign: Pr[Ti,j] = x2

i +x2 j

A common upper bound: E[Ti,j] = Pr[Ti,j] ≤ |xi −xj|(|xi|+|xj|) ES∼D[ 1 d |E(S,V −S)|] = 1 2 ∑

i,j

MijE[Ti,j] ≤ 1 2 ∑

i,j

Mij|xi −xj|(|xi|+|xj|)

SLIDE 112

Cauchy–Schwarz Inequality

|a·b| ≤ ‖a‖·‖b‖, as a·b = ‖a‖·‖b‖·cos(a,b)

Apply with a, b ∈ R^{n²}, where aij = √(Mij)·|xi − xj| and bij = √(Mij)·(|xi| + |xj|).

Numerator: E_{S∼D}[(1/d)·|E(S, V−S)|] = (1/2)·∑_{i,j} Mij·E[Ti,j] ≤ (1/2)·∑_{i,j} Mij·|xi − xj|·(|xi| + |xj|) = (1/2)·a·b ≤ (1/2)·‖a‖·‖b‖

SLIDE 115

Recall δ = ∑_{i,j} Mij·(xi − xj)² / ((1/n)·∑_{i,j} (xi − xj)²), aij = √(Mij)·|xi − xj|, bij = √(Mij)·(|xi| + |xj|).

‖a‖² = ∑_{i,j} Mij·(xi − xj)² = (δ/n)·∑_{i,j} (xi − xj)²
= (δ/n)·(∑_{i,j} (xi² + xj²) − ∑_{i,j} 2xixj)
= (δ/n)·(∑_{i,j} (xi² + xj²) − 2(∑_i xi)²)
≤ (δ/n)·∑_{i,j} (xi² + xj²) = 2δ·∑_i xi²

‖b‖² = ∑_{i,j} Mij·(|xi| + |xj|)² ≤ ∑_{i,j} Mij·(2xi² + 2xj²) = 4·∑_i xi²

SLIDE 118

Goal: E_{S∼D}[(1/d)·|E(S, V−S)|] / E_{S∼D}[min(|S|, |V−S|)] ≤ √(2δ)

Numerator: E_{S∼D}[(1/d)·|E(S, V−S)|] ≤ (1/2)·‖a‖·‖b‖ ≤ (1/2)·√(2δ·∑_i xi²)·√(4·∑_i xi²) = √(2δ)·∑_i xi²

Recall the denominator: E_{S∼D}[min(|S|, |V−S|)] = ∑_i xi²

We get E_{S∼D}[(1/d)·|E(S, V−S)|] / E_{S∼D}[min(|S|, |V−S|)] ≤ √(2δ).
Thus ∃Si such that h(Si) ≤ √(2δ), which gives h(G) ≤ √(2(1−λ2)).
SLIDE 130

Cycle

Tight example for the hard part of Cheeger?

µ/2 = (1−λ2)/2 ≤ h(G) ≤ √(2(1−λ2)) = √(2µ)

Will show the other side of Cheeger is asymptotically tight. Cycle on n nodes.
Edge expansion: cut in half. |S| = n/2, |E(S, V−S)| = 2 → h(G) = 4/n.
Show the eigenvalue gap µ is O(1/n²).
Find x ⊥ 1 with Rayleigh quotient xT Mx / xT x close to 1.
SLIDE 138

Find x ⊥ 1 with Rayleigh quotient xT Mx / xT x close to 1.

xi = i − n/4 if i ≤ n/2;  xi = 3n/4 − i if i > n/2.
So x1 ≈ −n/4, xn ≈ −n/4, x_{n/2} ≈ n/4.

Hit with M:
(Mx)i = −n/4 + 1/2 if i = 1 or i = n;  n/4 − 1 if i = n/2;  xi otherwise.

→ xT Mx = xT x·(1 − O(1/n²))
→ λ2 ≥ 1 − O(1/n²)
µ = λ1 − λ2 = O(1/n²)
h(G) = 4/n = Θ(√(2µ))

Asymptotically tight example for the upper bound of Cheeger: h(G) ≤ √(2(1−λ2)) = √(2µ).
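For the cycle the gap can also be checked against the known closed form: the eigenvalues of M = A/2 on the n-cycle are cos(2πk/n) for k = 0,...,n−1, so µ = 1 − cos(2π/n) ≈ 2π²/n². A quick numerical check (the choice of n values is arbitrary) that h(G) = 4/n = Θ(√(2µ)):

```python
import math

# n-cycle: lambda_2 = cos(2*pi/n), so mu = 1 - lambda_2 = 2*pi^2/n^2 + O(1/n^4)
for n in [100, 1000, 10000]:
    mu = 1 - math.cos(2 * math.pi / n)
    h = 4 / n                          # cut the cycle in half: 2 cut edges, |S| = n/2
    ratio = h / math.sqrt(2 * mu)      # h = Theta(sqrt(2*mu)): ratio tends to 2/pi
    assert abs(ratio - 2 / math.pi) < 0.01
```

The ratio h/√(2µ) converges to the constant 2/π, which is exactly the "asymptotically tight" statement: the upper bound √(2µ) is off from h(G) by only a constant factor.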
SLIDE 142

Sum up.

  • 1−λ2 as a relaxation of φ(G).
  • Sweeping cut algorithm.
  • Probabilistic argument to show there exists a good threshold cut.
  • Example: the cycle shows the Cheeger hard part is asymptotically tight.

SLIDE 144

Satish will be back on Tuesday.