EIGENVALUES OF SPARSE RANDOM REGULAR GRAPHS
Soumik Pal, University of Washington, Seattle
Seminar on Stochastic Processes, 2012
GRAPHS AND ADJACENCY MATRICES
Undirected graphs on n labeled vertices. Regular: degree d. Adjacency matrix = n × n symmetric matrix. Sparse - d ≪ n.
FIGURE: A regular graph on 6 labeled vertices and its 6 × 6 symmetric 0–1 adjacency matrix.
MODELS OF RANDOM REGULAR GRAPHS
The permutation model: G(n, 2). π - random permutation on [n]. 2-regular graph:
FIGURE: The 2-regular graph on vertices 1, . . . , 10 built from the cycles of π.
THE PERMUTATION MODEL G(n, 2d)
π1, . . . , πd iid uniform permutations. Superimpose.
Example on vertices 1, . . . , 5: π1 = (1 3 2)(4 5), π2 = (1 4 2).
Multiple edges, loops OK.
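The superimposition above can be simulated directly. Below is a minimal NumPy sketch (function name illustrative) that builds the G(n, 2d) adjacency matrix from d iid uniform permutations; each permutation contributes the edges {i, π(i)}, kept with multiplicity.

```python
import numpy as np

def permutation_model(n, d, seed=0):
    """Adjacency matrix of G(n, 2d): superimpose d iid uniform permutations.

    Loops and multiple edges are kept, so entries may exceed 1;
    a loop at vertex i contributes 2 to A[i, i].
    """
    rng = np.random.default_rng(seed)
    A = np.zeros((n, n), dtype=int)
    for _ in range(d):
        pi = rng.permutation(n)
        for i in range(n):
            A[i, pi[i]] += 1  # edge i -> pi(i) ...
            A[pi[i], i] += 1  # ... recorded symmetrically
    return A

A = permutation_model(10, 3)
# every vertex has degree 2d = 6 (a loop counts 2 toward its vertex)
```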
SPECTRAL GRAPH THEORY
Eigenvalues of the adjacency matrix. Why care?
Test RMT universality: McKay ’81, Dumitriu-P. ’10, Tran-Vu-Wang ’10, Ben Arous-Dang ’11, Erdős-Knowles-Yau ’11.
Expansion properties: Broder-Shamir ’87, Friedman ’91, ’08.
Quantum chaos: Smilansky ’10, Elon-Smilansky ’10.
Simulations: Jacobson et al. ’99, Miller et al. ’08.
and many more . . .
RANDOM MATRIX THEORY
A Wigner matrix is a square random matrix with upper triangular entries chosen independently; symmetric. Minor = principal submatrix, also Wigner.
A sample of a 4 × 4 Wigner matrix:
 −0.6   0.7   0.1   0.6
  0.7   2.1   2.5  −0.1
  0.1   2.5  −2.2   1.1
  0.6  −0.1   1.1  −0.6
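A minimal sketch of sampling such a matrix (Gaussian entries are an assumption for the sketch; the Wigner definition only requires independent upper-triangular entries and symmetry):

```python
import numpy as np

def wigner_sample(n, seed=1):
    """Sample an n x n Wigner matrix: draw independent entries,
    keep the upper triangle, and reflect it to enforce symmetry."""
    rng = np.random.default_rng(seed)
    M = rng.normal(size=(n, n))
    W = np.triu(M) + np.triu(M, 1).T  # diagonal kept exactly once
    return W

W = wigner_sample(4)
# any principal submatrix (minor) is again a Wigner matrix
W_minor = W[:3, :3]
```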
EIGENVALUE FLUCTUATIONS
W∞ - Wigner array. Wn - n × n minor. E-values {λ_i^n}.
Linear eigenvalue statistics: tr f(Wn) := Σ_{i=1}^n f( λ_i^n / (2√n) ).
(Classical Theorem) If f is analytic,
lim_{n→∞} [ tr f(Wn) − E tr f(Wn) ] = N(0, σ_f²).
Eigenvalue repulsion.
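As a numerical illustration (not part of the original slides): for f(x) = x², the statistic tr f(Wn) = Σ f(λ_i/(2√n)) equals tr(Wn²)/(4n), which for standard Gaussian entries concentrates near n/4 with O(1) fluctuations; no 1/√n normalization is needed, reflecting eigenvalue repulsion.

```python
import numpy as np

rng = np.random.default_rng(7)
n = 200
M = rng.normal(size=(n, n))
W = np.triu(M) + np.triu(M, 1).T      # Wigner sample, Gaussian entries

lam = np.linalg.eigvalsh(W)           # eigenvalues of W_n
stat = np.sum((lam / (2 * np.sqrt(n))) ** 2)  # tr f(W_n) for f(x) = x^2

# stat = tr(W^2)/(4n), close to n/4 = 50, with O(1) fluctuations
```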
JOINT CONVERGENCE ACROSS MINORS
Results by Borodin ’10. Choose 0 < t1 ≤ t2 ≤ . . . ≤ tk and f1, f2, . . . , fk.
lim_{n→∞} ( tr fi(W⌊nti⌋) − E tr fi(W⌊nti⌋), i ∈ [k] ) = Gaussian.
Mean zero. Covariance kernel?
‘Limiting fluctuation of the Height Function is the Gaussian Free Field.’
CHEBYSHEV POLYNOMIALS - FIRST KIND
Tn(x), n ≥ 0 - poly of degree n. Orthogonal polynomials on [−1, 1]. Weight measure: Arc-Sine law. Fourier expansion: f = Σ_n cn Tn.
FIGURE: T1, T3, T8 on [−1, 1].
(Borodin ’10) Fix s ≤ t and i, j ∈ N.
lim_{n→∞} Cov( tr Ti(W⌊nt⌋), tr Tj(W⌊ns⌋) ) = δij (j/2) (s/t)^{j/2}.
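A quick check of the defining identity T_n(cos θ) = cos(nθ), using NumPy's Chebyshev module (a sketch, not from the slides), for the three degrees shown in the figure:

```python
import numpy as np
from numpy.polynomial import chebyshev as C

# T_n has coefficient vector e_n in the Chebyshev basis
theta = np.linspace(0.0, np.pi, 101)
for deg in (1, 3, 8):
    coeffs = np.zeros(deg + 1)
    coeffs[deg] = 1.0
    lhs = C.chebval(np.cos(theta), coeffs)   # T_n(cos theta)
    assert np.allclose(lhs, np.cos(deg * theta))
```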
MAIN RESULTS (with Toby Johnson ’12)
CHINESE RESTAURANT PROCESS
(Dubins-Pitman) Randomly grows a permutation. Every point adds a neighbor to its left at rate one. New fixed points at rate one.
σ grows: (1) → (1)(2) → (1)(2 3) → (1)(2 4 3) → (1)(2 4 3)(5) → (1)(2 4 3)(5)(6) → (1)(2 4 3)(5 7)(6) → (1)(2 4 3)(5 8 7)(6) → (1)(2 4 3 9)(5 8 7)(6) → (1)(2 4 3 9)(5 10 8 7)(6).
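A discrete-time sketch of the growth rule (the jump chain of the process above; names illustrative): element m + 1 either becomes a new fixed point (one option) or is inserted to the left of one of the m existing elements, all m + 1 options equally likely, which keeps the permutation uniform at every stage.

```python
import random

def crp_grow(n, seed=42):
    """Grow a uniform permutation of [n] one element at a time
    (Chinese restaurant process, jump-chain version)."""
    rng = random.Random(seed)
    sigma = {}  # sigma[i] = image of i under the permutation
    for new in range(1, n + 1):
        j = rng.randint(0, new - 1)   # `new` equally likely options
        if j == 0:
            sigma[new] = new          # start a new cycle (fixed point)
        else:
            sigma[new] = sigma[j]     # insert `new` to the left of j:
            sigma[j] = new            # j now maps to `new`
    return sigma

sigma = crp_grow(10)
# sigma is a genuine permutation of {1, ..., 10}
```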
CYCLES AND EIGENVALUES
Case of G(n, 2). Adjacency matrix - blocks of cycles. E-values of a cycle of size k: { 2 cos(2πj/k), j ∈ [k] }.
Under the CRP: cycle of size k → size (k + 1) at rate k. New cycle of size 1 at rate 1. Stochastic process of eigenvalues.
Kerov-Olshanski-Vershik ’93, Olshanski-Vershik ’96, Bourgade-Najnudel-Nikeghbali ’11.
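The eigenvalue formula for a k-cycle can be verified numerically (sketch):

```python
import numpy as np

k = 7
A = np.zeros((k, k))
for i in range(k):                 # adjacency matrix of the k-cycle
    A[i, (i + 1) % k] = 1
    A[(i + 1) % k, i] = 1

evals = np.sort(np.linalg.eigvalsh(A))
theory = np.sort(2 * np.cos(2 * np.pi * np.arange(1, k + 1) / k))
assert np.allclose(evals, theory)  # {2 cos(2 pi j / k), j in [k]}
```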
THE CASE OF GENERAL G(n, 2d)
Simultaneous insertion in (π1, π2, . . . , πd) by CRP. Let Ti ∼ Exp(i), i ∈ N, and
nt = max{ m : Σ_{i=1}^m Ti ≤ t }.
G(t) := G(nt, 2d). No cyclic decomposition.
GROWTH OF A CYCLE
(Johnson-P. ’12) Existing cycles grow in size.
Before: π1 = (1 2 3)(4 5), π2 = (1 5)(4 3)(2). After: π1 = (1 2 6 3)(4 5), π2 = (1 5)(4 3)(2 6).
FIGURE: Vertex 6 is inserted between 2 and 3 in π1.
BIRTH OF A CYCLE
Before: π1 = (2 3 1)(4 5), π2 = (2 1 3 4 5). After: π1 = (2 3 1 6)(4 5), π2 = (2 1 3 4 6 5).
FIGURE: A cycle forms “spontaneously”.
CYCLE COUNTS
C_k^(s)(t) = # k-cycles in G(s + t). Non-Markovian process in t, with s fixed.
(C_k^(s)(t), k ∈ N, −∞ < t < ∞) converges as s → ∞.
Limiting process (Nk(t), k ∈ N, −∞ < t < ∞) is Markov, running in stationarity.
THE LIMITING PROCESS
(Johnson-P. ’12) In the limit: existing k-cycles grow to (k + 1) at rate k. New k-cycles created at rate µ(k) ⊗ Leb. Here
µ(k) = (1/2)[ a(d, k) − a(d, k − 1) ], k ∈ N, a(d, 0) := 0,
where a(d, k) = (2d − 1)^k − 1 + 2d for k even, and (2d − 1)^k + 1 for k odd.
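A hedged sketch of the rate computation (function names illustrative). Sanity check: for d = 1 (a single permutation), a(1, k) = 2 for every k ≥ 1, consistent with Poi(1/k) cycle counts of a uniform permutation.

```python
def a(d, k):
    """a(d, k) = (2d-1)^k - 1 + 2d for k even, (2d-1)^k + 1 for k odd."""
    if k == 0:
        return 0
    return (2 * d - 1) ** k + (2 * d - 1 if k % 2 == 0 else 1)

def mu(d, k):
    """Creation rate of new k-cycles: mu(k) = (a(d,k) - a(d,k-1)) / 2."""
    return (a(d, k) - a(d, k - 1)) / 2

# d = 1 reduces to a single uniform permutation: a(1, k) = 2 for k >= 1
assert all(a(1, k) == 2 for k in range(1, 10))
# new fixed points (1-cycles) appear at rate mu(1) = d
assert mu(3, 1) == 3
```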
MORE FORMALLY ...
(Johnson-P. ’12) Poisson point process χ on N × (−∞, ∞) with intensity µ ⊗ Leb. Yule process: Lf(k) = k( f(k + 1) − f(k) ), k ∈ N.
For (k, y) ∈ χ, start an independent Yule process (X_{k,y}(t), t ≥ 0) from k.
Define Nk(t) := Σ_{(j,y) ∈ χ ∩ ([k] × (−∞, t])} 1{ X_{j,y}(t − y) = k }.
INVARIANT DISTRIBUTION
(C_k^(s)(t), k ∈ N, t ∈ R) → (Nk(t), k ∈ N, t ∈ R) as s → ∞.
Marginal distribution: (Nk(t), k ∈ N) ∼ ⊗_k Poi( a(d, k) / (2k) ).
Dumitriu-Johnson-P.-Paquette ’11, Bollobás ’80, Wormald ’81.
EIGENVALUES AND CYCLES
What do the eigenvalues count? Cyclically non-backtracking walks (CNBW).
FIGURE: a backtracking walk vs. a cyclically non-backtracking walk.
COUNTING CNBW
(Friedman ’08, DJPP ’11) N_k^n = # CNBW’s of size k. Then
(2d − 1)^{−k/2} N_k^n = Σ_{i=1}^n Γk(λi), for a polynomial Γk.
Here {λi, i ∈ [n]} are e-values of G(n, 2d) / (2√(2d − 1)).
Moreover, w.h.p. all CNBW’s are repeated cycles.
PROCESS OF EIGENVALUES
THEOREM (JOHNSON-P. ’12)
For any polynomial f, ( tr f(G(s + t)), t ∈ R ) converges to a linear combination of (Nk(t), k ∈ N).
Note the unusual Markov property.
BACK TO CHEBYSHEV POLYNOMIALS
THEOREM (JOHNSON-P. ’12)
{Tk, k ∈ N} Chebyshev polys.
lim_{d→∞} lim_{s→∞} ( tr Tk(G(s + t)) − E tr Tk(G(s + t)), t ∈ R, k ∈ N ) = (Uk(t), t ∈ R, k ∈ N).
(Uk) indep stationary O-U: dUk(t) = −k Uk(t) dt + k dWk(t), t ≥ 0.
Moreover, for u < t,
lim_{d→∞} lim_{s→∞} Cov( tr Ti(G(s + t)), tr Tj(G(s + u)) ) = δij (j/2) e^{j(u−t)}.
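The limiting O-U dynamics can be simulated by Euler-Maruyama (an illustrative sketch, not part of the slides): the stationary variance of dU = −kU dt + k dW is k²/(2k) = k/2, matching the i = j, u = t value j/2 of the covariance above.

```python
import numpy as np

rng = np.random.default_rng(3)
k, dt, steps = 2.0, 0.01, 200_000

u = np.empty(steps)
u[0] = 0.0
dW = rng.normal(size=steps - 1) * np.sqrt(dt)  # Brownian increments
for t in range(steps - 1):
    # Euler-Maruyama step for dU = -k U dt + k dW
    u[t + 1] = u[t] - k * u[t] * dt + k * dW[t]

var = u[steps // 10:].var()  # discard burn-in
# var should be near the stationary variance k/2 = 1.0
```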
COMPARISON WITH WIGNER
(Borodin) Process of Wigner minors: ⟨Ti(t), Tj(s)⟩ = δij (j/2) (s/t)^{j/2}, s, t > 0.
(Johnson-P.) Random regular graphs with CRP: ⟨Ti(t), Tj(s)⟩ = δij (j/2) e^{j(s−t)}, s, t ∈ R.
Deterministic time-change.
Unfortunately, no GFF convergence yet ...
AN EASY EXAMPLE
G(n, 2d). Take T1(x) = x. Then
tr T1(G) = Σ_{i=1}^n λi = 2 Σ_{i=1}^d mi, where mi = # fixed points in πi.
The law of each mi is within total variation distance ε(n) of Poi(1).
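A quick simulation of the fixed-point count (sketch): the number of fixed points of a uniform permutation is approximately Poi(1), and its mean is exactly 1 for every n.

```python
import numpy as np

rng = np.random.default_rng(2)
n, trials = 1000, 2000

counts = np.array([
    (rng.permutation(n) == np.arange(n)).sum()  # fixed points of one pi
    for _ in range(trials)
])

mean_fixed = counts.mean()
# E[# fixed points] = 1 exactly; Poi(1) is the large-n limit law
```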