

SLIDE 1

EIGENVALUES OF SPARSE RANDOM REGULAR GRAPHS

Soumik Pal, University of Washington, Seattle
Seminar on Stochastic Processes, 2012

SLIDE 2

GRAPHS AND ADJACENCY MATRICES

Undirected graphs on n labeled vertices. Regular: every vertex has degree d. Adjacency matrix = n × n symmetric matrix. Sparse: d ≪ n.

FIGURE: a 6-vertex regular graph with its adjacency matrix (entries not recoverable from the extraction).

SLIDE 3

MODELS OF RANDOM REGULAR GRAPHS

The permutation model: G(n, 2). π - a uniform random permutation on [n]. Connect each i to π(i) to get a 2-regular graph.

FIGURE: a 2-regular graph on 10 vertices built from a random permutation.

SLIDE 4

THE PERMUTATION MODEL G(n, 2d)

π1, . . . , πd iid uniform permutations on [n]. Superimpose the d corresponding 2-regular graphs.

Example on [5]: π1 = (1 3 2)(4 5), π2 = (1 4 2).

Multiple edges, loops OK.
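The construction above is easy to simulate. Here is a minimal numpy sketch (mine, not the speaker's) that builds the adjacency matrix of G(n, 2d) from d uniform permutations; the convention that a loop adds 2 to the diagonal is an assumption, chosen so that every row sums to 2d:

```python
import numpy as np

def permutation_model(n, d, rng):
    """Adjacency matrix of G(n, 2d): superimpose d iid uniform
    permutations, with an edge i -- pi(i) for every i."""
    A = np.zeros((n, n), dtype=int)
    for _ in range(d):
        pi = rng.permutation(n)
        for i in range(n):
            if pi[i] == i:
                A[i, i] += 2          # a loop contributes 2 to the degree
            else:
                A[i, pi[i]] += 1      # multiple edges simply accumulate
                A[pi[i], i] += 1
    return A

rng = np.random.default_rng(0)
A = permutation_model(10, 3, rng)
print((A.sum(axis=1) == 6).all())     # every vertex has degree 2d = 6 -> True
```

Each permutation gives every vertex one outgoing and one incoming edge, so the multigraph is exactly 2d-regular.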

SLIDE 9

SPECTRAL GRAPH THEORY

Eigenvalues of the adjacency matrix. Why care?
Test RMT universality: McKay ’81, Dumitriu-P. ’10, Tran-Vu-Wang ’10, Ben Arous-Dang ’11, Erdős-Knowles-Yau ’11.
Expansion properties: Broder-Shamir ’87, Friedman ’91, ’08.
Quantum chaos: Smilansky ’10, Elon-Smilansky ’10.
Simulations: Jacobson et al. ’99, Miller et al. ’08.
And many more . . .

SLIDE 10

RANDOM MATRIX THEORY

A Wigner matrix is a square random matrix with upper triangular entries chosen independently; symmetric. A minor (principal submatrix) is also Wigner.

A sample of a 4 × 4 Wigner matrix:

⎡ −0.6   0.7   0.1   0.6 ⎤
⎢  0.7   2.1   2.5  −0.1 ⎥
⎢  0.1   2.5  −2.2   1.1 ⎥
⎣  0.6  −0.1   1.1  −0.6 ⎦
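As a quick illustration, a Wigner sample can be generated by symmetrizing an iid matrix; the Gaussian entries below are one convenient choice, not required by the definition:

```python
import numpy as np

def wigner(n, rng):
    """An n x n Wigner sample: independent upper-triangular entries,
    reflected across the diagonal to make the matrix symmetric."""
    M = rng.standard_normal((n, n))
    return np.triu(M) + np.triu(M, 1).T

rng = np.random.default_rng(1)
W = wigner(4, rng)
minor = W[:3, :3]   # a principal submatrix (minor) is again Wigner
print(np.allclose(W, W.T) and np.allclose(minor, minor.T))  # True
```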

SLIDE 14

EIGENVALUE FLUCTUATIONS

W∞ - Wigner array. Wn - its n × n minor, with eigenvalues {λ_i^n}.

Linear eigenvalue statistics:

tr f(Wn) := Σ_{i=1}^{n} f( λ_i^n / (2√n) ).

(Classical Theorem) If f is analytic,

lim_{n→∞} [ tr f(Wn) − E tr f(Wn) ] = N(0, σ_f²).

Eigenvalue repulsion: the fluctuations are O(1), with no normalization needed.
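The statistic itself is straightforward to compute. A small sketch, assuming the λ_i/(2√n) normalization stated above; the final check, that for f(x) = x² the statistic equals tr(B²), is a consistency test of the code, not part of the theorem:

```python
import numpy as np

def tr_f(W, f):
    """Linear eigenvalue statistic: sum of f(lambda_i / (2 sqrt(n)))."""
    n = W.shape[0]
    lam = np.linalg.eigvalsh(W) / (2 * np.sqrt(n))
    return f(lam).sum()

rng = np.random.default_rng(2)
n = 50
M = rng.standard_normal((n, n))
W = np.triu(M) + np.triu(M, 1).T       # a Wigner sample

B = W / (2 * np.sqrt(n))
# for a polynomial f, the statistic equals the trace of f applied to B
print(np.isclose(tr_f(W, lambda x: x ** 2), np.trace(B @ B)))  # True
```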

SLIDE 16

JOINT CONVERGENCE ACROSS MINORS

Results by Borodin ’10. Choose 0 < t1 ≤ t2 ≤ . . . ≤ tk and test functions f1, f2, . . . , fk. Then

lim_{n→∞} ( tr f_i(W_{⌊n t_i⌋}) − E tr f_i(W_{⌊n t_i⌋}), i ∈ [k] ) = Gaussian.

Mean zero. Covariance kernel?

‘Limiting fluctuation of the Height Function is the Gaussian Free Field.’

SLIDE 18

CHEBYSHEV POLYNOMIALS - FIRST KIND

Tn(x), n ≥ 0 - polynomial of degree n. Orthogonal polynomials on [−1, 1]. Weight measure: the Arc-Sine law. Fourier expansion: f = Σ_n c_n T_n.

FIGURE: T1, T3, T8 on [−1, 1].

(Borodin ’10) Fix s ≤ t and i, j ∈ N. Then

lim_{n→∞} Cov( tr T_i(W_{⌊nt⌋}), tr T_j(W_{⌊ns⌋}) ) = δ_ij (j/2) (s/t)^{j/2}.
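The arc-sine orthogonality is easy to verify numerically. A sketch using numpy's Chebyshev utilities and the substitution x = cos θ, under which the weight 1/(π√(1 − x²)) becomes uniform on [0, π]:

```python
import numpy as np
from numpy.polynomial import chebyshev as C

def arcsine_inner(i, j, m=4096):
    """<T_i, T_j> under the arc-sine weight 1/(pi sqrt(1 - x^2)),
    computed via x = cos(theta) and a midpoint rule in theta."""
    theta = (np.arange(m) + 0.5) * np.pi / m
    x = np.cos(theta)
    Ti = C.chebval(x, [0] * i + [1])   # coefficient vector selecting T_i
    Tj = C.chebval(x, [0] * j + [1])
    return (Ti * Tj).mean()            # (1/pi) * integral over [0, pi]

# distinct indices are orthogonal; <T_j, T_j> = 1/2 for j >= 1
print(abs(arcsine_inner(1, 3)) < 1e-9, abs(arcsine_inner(3, 3) - 0.5) < 1e-9)
```

The midpoint rule is exact here up to rounding, because the integrands become trigonometric polynomials in θ.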

SLIDE 20

Main Results - (with Toby Johnson ’12)

SLIDE 21

CHINESE RESTAURANT PROCESS

(Dubins-Pitman) Randomly grows a permutation. Every point adds a neighbor to its left at rate one. New fixed points appear at rate one. A sample path of the growing permutation:

σ = (1)
σ = (1)(2)
σ = (1)(2 3)
σ = (1)(2 4 3)
σ = (1)(2 4 3)(5)
σ = (1)(2 4 3)(5)(6)
σ = (1)(2 4 3)(5 7)(6)
σ = (1)(2 4 3)(5 8 7)(6)
σ = (1)(2 4 3 9)(5 8 7)(6)
σ = (1)(2 4 3 9)(5 10 8 7)(6)
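In discrete time the two rates translate into the familiar seating rule: the m-th element starts a new cycle with probability 1/m, otherwise it is inserted immediately to the left of a uniformly chosen existing element. A sketch (the dict-based representation and the linear predecessor lookup are my implementation choices, not from the talk):

```python
import random

def crp_permutation(n, rng):
    """Grow a uniform random permutation of [n] by the Chinese
    restaurant process, represented as a dict sigma: i -> sigma(i)."""
    sigma = {}
    for m in range(1, n + 1):
        if rng.random() < 1.0 / m:
            sigma[m] = m                    # new fixed point (new cycle)
        else:
            i = rng.choice(range(1, m))     # uniform existing element
            pred = next(a for a in sigma if sigma[a] == i)
            sigma[pred] = m                 # insert m just before i
            sigma[m] = i
    return sigma

rng = random.Random(42)
sigma = crp_permutation(10, rng)
print(sorted(sigma) == sorted(sigma.values()) == list(range(1, 11)))  # True
```

Each insertion preserves bijectivity, so every intermediate σ is a genuine permutation, as in the sample path above.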

SLIDE 31

CYCLES AND EIGENVALUES

Case of G(n, 2). The adjacency matrix decomposes into blocks of cycles. Eigenvalues contributed by a cycle of size k: { 2 cos(2πj/k), j ∈ [k] }.

Under the CRP: a cycle of size k grows to size (k + 1) at rate k. New cycles of size 1 appear at rate 1. This gives a stochastic process of eigenvalues. Kerov-Olshanski-Vershik ’93, Olshanski-Vershik ’96, Bourgade-Najnudel-Nikeghbali ’11.
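A quick numerical check of the eigenvalue formula: the adjacency matrix of a single k-cycle is a circulant, so its eigenvalues are 2 cos(2πj/k), j ∈ [k] (equivalently cos(2πj/k) after dividing the matrix by 2, cf. the normalization used on the CNBW slide):

```python
import numpy as np

def cycle_adjacency(k):
    """Adjacency matrix of a single k-cycle."""
    A = np.zeros((k, k))
    for i in range(k):
        A[i, (i + 1) % k] += 1
        A[(i + 1) % k, i] += 1
    return A

k = 7
lam = np.sort(np.linalg.eigvalsh(cycle_adjacency(k)))
expected = np.sort(2 * np.cos(2 * np.pi * np.arange(1, k + 1) / k))
print(np.allclose(lam, expected))  # True
```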

SLIDE 33

THE CASE OF GENERAL G(n, 2d)

Simultaneous insertion in (π1, π2, . . . , πd) by the CRP. Let Ti ∼ Exp(i), i ∈ N, and set

n_t = max{ m : Σ_{i=1}^{m} Ti ≤ t }.

Define G(t) := G(n_t, 2d). For d > 1, no cyclic decomposition.

SLIDE 34

GROWTH OF A CYCLE

(Johnson-P. ’12) Existing cycles grow in size.

Before: π1 = (1 2 3)(4 5), π2 = (1 5)(4 3)(2).
After: π1 = (1 2 6 3)(4 5), π2 = (1 5)(4 3)(2 6).

FIGURE: Vertex 6 is inserted between 2 and 3 in π1.

SLIDE 35

BIRTH OF A CYCLE

Before: π1 = (2 3 1)(4 5), π2 = (2 1 3 4 5).
After: π1 = (2 3 1 6)(4 5), π2 = (2 1 3 4 6 5).

FIGURE: A cycle forms “spontaneously”.

SLIDE 36

CYCLE COUNTS

C_k^(s)(t) = # k-cycles in G(s + t). For fixed s, this is a non-Markovian process in t.

(C_k^(s)(t), k ∈ N, −∞ < t < ∞) converges as s → ∞.

The limiting process (N_k(t), k ∈ N, −∞ < t < ∞) is Markov, running in stationarity.

SLIDE 38

THE LIMITING PROCESS

(Johnson-P. ’12) In the limit: existing k-cycles grow to (k + 1)-cycles at rate k. New k-cycles are created at rate µ(k) ⊗ Leb. Here

µ(k) = (1/2) [ a(d, k) − a(d, k − 1) ], k ∈ N, a(d, 0) := 0,

where a(d, k) = (2d − 1)^k − 1 + 2d for k even, and (2d − 1)^k + 1 for k odd.
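The rate formula is easy to tabulate. A sketch; the d = 1 sanity check below reflects the single-permutation CRP, where only new fixed points (1-cycles) are ever born:

```python
def a(d, k):
    """a(d, k) from the limiting creation rate, with a(d, 0) := 0."""
    if k == 0:
        return 0
    return (2 * d - 1) ** k + (2 * d - 1 if k % 2 == 0 else 1)

def mu(d, k):
    """Creation rate of new k-cycles in the limiting process."""
    return 0.5 * (a(d, k) - a(d, k - 1))

# d = 1: new cycles of size 1 at rate 1, no larger cycles born,
# matching the CRP description
print(mu(1, 1), [mu(1, k) for k in range(2, 6)])  # 1.0 [0.0, 0.0, 0.0, 0.0]
```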

SLIDE 39

MORE FORMALLY ...

(Johnson-P. ’12) Poisson point process χ on N × (−∞, ∞) with intensity µ ⊗ Leb. Yule process generator: Lf(k) = k (f(k + 1) − f(k)), k ∈ N. For each point (k, y) ∈ χ, start an independent Yule process (X_{k,y}(t), t ≥ 0) from k. Define

N_k(t) := Σ_{(j,y) ∈ χ ∩ ([k] × (−∞, t])} 1{ X_{j,y}(t − y) = k }.
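The Yule process itself can be simulated by drawing exponential holding times. A Gillespie-style sketch; the benchmark E X(t) = k·e^t is the standard mean of a Yule process started from k, my addition rather than a claim from the slides:

```python
import math, random

def yule_at(k, t, rng):
    """Size at time t of a Yule process (birth rate = current size)
    started from k: generator Lf(k) = k (f(k+1) - f(k))."""
    size, clock = k, 0.0
    while True:
        clock += rng.expovariate(size)   # holding time Exp(size)
        if clock > t:
            return size
        size += 1

rng = random.Random(3)
k, t, trials = 2, 0.7, 20000
mean = sum(yule_at(k, t, rng) for _ in range(trials)) / trials
print(abs(mean - k * math.exp(t)) < 0.15)   # Monte Carlo check of E X(t) = k e^t
```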

SLIDE 42

INVARIANT DISTRIBUTION

(C_k^(s)(t), k ∈ N, t ∈ R) → (N_k(t), k ∈ N, t ∈ R) as s → ∞. Marginal distribution:

(N_k(t), k ∈ N) ∼ ⊗_k Poi( a(d, k) / (2k) ).

Dumitriu-Johnson-P.-Paquette ’11; Bollobás ’80, Wormald ’81.
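For d = 1 the marginal says the number of k-cycles is approximately Poi(a(1, k)/(2k)) = Poi(1/k), the classical cycle-count law for a uniform permutation. A Monte Carlo sketch of that special case:

```python
import random

def cycle_counts(n, rng):
    """Counts of each cycle length in a uniform permutation of [n]."""
    perm = list(range(n))
    rng.shuffle(perm)
    seen, counts = [False] * n, {}
    for i in range(n):
        if not seen[i]:
            j, length = i, 0
            while not seen[j]:
                seen[j] = True
                j = perm[j]
                length += 1
            counts[length] = counts.get(length, 0) + 1
    return counts

rng = random.Random(4)
trials, n, k = 20000, 30, 3
mean_k = sum(cycle_counts(n, rng).get(k, 0) for _ in range(trials)) / trials
# Poi(1/k) predicts about 1/3 three-cycles on average
print(abs(mean_k - 1 / 3) < 0.03)
```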

SLIDE 44

EIGENVALUES AND CYCLES

What do the eigenvalues count? Cyclically non-backtracking walks (CNBW).

FIGURE: a sequence of walks contrasting backtracking steps with cyclically non-backtracking ones.

SLIDE 58

COUNTING CNBW

(Friedman ’08, DJPP ’11) N_k^n = # CNBW’s of size k. Then

(2d − 1)^{−k/2} N_k^n = Σ_{i=1}^{n} Γ_k(λ_i)

for a polynomial Γ_k, where {λ_i, i ∈ [n]} are the eigenvalues of G(n, 2d) / (2√(2d − 1)).

Moreover, w.h.p. all CNBW’s are repeated cycles.

SLIDE 60

PROCESS OF EIGENVALUES

THEOREM (JOHNSON-P. ’12)
For any polynomial f, (tr f(G(s + t)), t ∈ R) converges to a linear combination of (N_k(t), k ∈ N).

Note the unusual Markov property.

SLIDE 62

BACK TO CHEBYSHEV POLYNOMIALS

THEOREM (JOHNSON-P. ’12)
{T_k, k ∈ N} the Chebyshev polynomials. Then

lim_{d→∞} lim_{s→∞} ( tr T_k(G(s + t)) − E tr T_k(G(s + t)), t ∈ R, k ∈ N ) = (U_k(t), t ∈ R, k ∈ N),

where the U_k are independent stationary Ornstein-Uhlenbeck processes:

dU_k(t) = −k U_k(t) dt + k dW_k(t), t ≥ 0.

Moreover, for u < t,

lim_{d→∞} lim_{s→∞} Cov( tr T_i(G(s + t)), tr T_j(G(s + u)) ) = δ_ij (j/2) e^{j(u−t)}.
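The limiting O-U dynamics are easy to simulate. An Euler-Maruyama sketch; the stationary variance k/2 follows from the SDE's coefficients (k²/(2k)) and matches the δ_ij (j/2) covariance above:

```python
import numpy as np

def ou_path(k, dt, steps, rng):
    """Euler-Maruyama for dU = -k U dt + k dW, started from the
    stationary law N(0, k/2)."""
    u = np.empty(steps)
    u[0] = rng.normal(0.0, np.sqrt(k / 2))
    noise = rng.standard_normal(steps - 1) * np.sqrt(dt)
    for i in range(1, steps):
        u[i] = u[i - 1] - k * u[i - 1] * dt + k * noise[i - 1]
    return u

rng = np.random.default_rng(5)
k = 2
u = ou_path(k, dt=0.01, steps=200000, rng=rng)
print(abs(u.var() - k / 2) < 0.15)   # stationary variance should be near k/2 = 1
```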

SLIDE 64

COMPARISON WITH WIGNER

(Borodin) Process of Wigner minors: Cov(T_i(t), T_j(s)) = δ_ij (j/2) (s/t)^{j/2}, 0 < s ≤ t.
(Johnson-P.) Random regular graphs with CRP: Cov(T_i(t), T_j(s)) = δ_ij (j/2) e^{j(s−t)}, s ≤ t in R.

The two kernels agree up to a deterministic time-change. Unfortunately, no GFF convergence yet ...

SLIDE 67

An easy example

SLIDE 68

AN EASY EXAMPLE

G(n, 2d). Take T1(x) = x. Then

tr T1(G) = Σ_{i=1}^{n} λ_i = 2 Σ_{i=1}^{d} m_i,

where m_i = # fixed points of π_i. Each m_i is within ε(n) of Poi(1) in distribution, so Σ_i m_i ≈ Poi(d). The process of fixed-point counts ⇒ OU as d → ∞.
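The fixed-point claim is easy to test by simulation. A sketch checking that the number of fixed points of a uniform permutation has mean and variance near 1, as Poi(1) predicts:

```python
import random

def n_fixed(n, rng):
    """Number of fixed points of a uniform permutation of [n]."""
    perm = list(range(n))
    rng.shuffle(perm)
    return sum(1 for i in range(n) if perm[i] == i)

rng = random.Random(6)
trials, n = 20000, 50
counts = [n_fixed(n, rng) for _ in range(trials)]
mean = sum(counts) / trials
var = sum((c - mean) ** 2 for c in counts) / trials
print(abs(mean - 1) < 0.05 and abs(var - 1) < 0.1)  # Poi(1): mean = var = 1
```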

SLIDE 69

SEVERAL OPEN PROBLEMS ...

Dynamic case: GFF convergence.
Static case: How to capture eigenvalue distribution functions? No information about long walks/cycles known.
Properties of eigenvectors. Gaussian law?
Generalization to graphs with fixed-degree distribution.

Thank You