Interacting diffusions on random sparse graphs
EBP, July 2019
Guilherme Reis - Federal University of Bahia - ghreis@impa.br
Joint work with Roberto Imbuzeiro Oliveira - IMPA - rimfo@impa.br and Lucas Stolerman - UCSD - lstolerman@eng.ucsd.edu
I am going to present a mean-field model of particle systems. Mean-field means that every particle interacts directly with all the other particles. My purpose in this talk is to show how we go beyond the mean-field case.

Remark: everything is loosely stated, and I will present everything through a particular example.

First I will introduce the Kuramoto model.
The Kuramoto model is the following system of ODEs (t ∈ [0, T]):

dθi(t) = (1/N) ∑_{j=1}^{N} sin(θj(t) − θi(t)) dt + ωi dt,   i ∈ {1, · · · , N}

The equation describes the path of particle i over the time interval [0, T].
Particle i interacts with all the other particles (mean-field); we normalize by N.
sin(θj(t) − θi(t)) is the interaction between the particles.
The ωi are natural frequencies: in the absence of interaction the particles evolve independently with velocities ωi.
Mean-field equals complete graph.
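As a quick sanity check (my own illustration, not part of the talk), the system above can be integrated with an explicit Euler scheme; with identical natural frequencies the mean-field model synchronizes, which we can see through the order parameter r = |(1/N) ∑_j e^{iθj}|:

```python
import math
import random

def kuramoto_step(theta, omega, dt):
    """One explicit-Euler step of the mean-field Kuramoto ODE:
    dtheta_i = (1/N) sum_j sin(theta_j - theta_i) dt + omega_i dt."""
    n = len(theta)
    new = []
    for i in range(n):
        coupling = sum(math.sin(theta[j] - theta[i]) for j in range(n)) / n
        new.append(theta[i] + (coupling + omega[i]) * dt)
    return new

# N = 50 oscillators, identical natural frequencies, random initial phases.
random.seed(0)
n, dt, steps = 50, 0.01, 2000
theta = [random.uniform(-math.pi, math.pi) for _ in range(n)]
omega = [0.0] * n
for _ in range(steps):
    theta = kuramoto_step(theta, omega, dt)

# Order parameter r in [0, 1]: r near 1 means the phases have synchronized.
r = abs(sum(complex(math.cos(t), math.sin(t)) for t in theta)) / n
print(round(r, 3))
```

The coupling strength, time horizon and step size here are arbitrary choices for the demonstration, not parameters taken from the papers.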
Lucas and Roberto were very interested in the Kuramoto model: they were trying to understand the behaviour of neuronal populations during epileptic phenomena.

The Kuramoto model is famous as a model of synchronization: https://www.youtube.com/watch?v=T58lGKREubo

In the works I am going to present we go in another direction. We do NOT have any theorems about synchronization here (although we do have some simulations). Since our spins live on R (and not on S1) we need t ∈ [0, T]. Our contribution is to remove the mean-field assumption.
[Figure: a graph G on vertices 1, . . . , 5, with a spin θi(t) ∈ R attached to each vertex i.]

Start with a graph G. Think of each particle as sitting on a vertex: it is like a spin in R evolving in time. We want the Kuramoto model with interactions given by G.
The particles follow the following system of SDEs (t ∈ [0, T]):

dθi(t) = (1/di) ∑_{j∼i} sin(θj(t) − θi(t)) dt + ωi dt + dBi(t),   ∀i ∈ G

The equation describes the evolution of the random path of each particle.
Particle i interacts with its neighbours; we normalize by its degree di.
The interaction function is the same as in the mean-field case.
The ωi are natural frequencies.
The Bi are independent standard Brownian motions.

What do we want to prove?
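Again as an illustration of mine (not from the papers), the graph version can be simulated with an Euler-Maruyama scheme, the only changes from the mean-field case being the degree normalization and the Brownian increments:

```python
import math
import random

def graph_kuramoto_em(adj, omega, t_max, dt, seed=0):
    """Euler-Maruyama for
    dtheta_i = (1/d_i) sum_{j ~ i} sin(theta_j - theta_i) dt + omega_i dt + dB_i(t),
    where the graph is given as adjacency lists."""
    rng = random.Random(seed)
    n = len(adj)
    theta = [rng.uniform(-math.pi, math.pi) for _ in range(n)]
    sqrt_dt = math.sqrt(dt)
    for _ in range(int(t_max / dt)):
        new = []
        for i in range(n):
            d_i = max(len(adj[i]), 1)  # isolated vertices get no interaction term
            drift = sum(math.sin(theta[j] - theta[i]) for j in adj[i]) / d_i + omega[i]
            # Brownian increment over [t, t+dt] has standard deviation sqrt(dt).
            new.append(theta[i] + drift * dt + sqrt_dt * rng.gauss(0.0, 1.0))
        theta = new
    return theta

# Example: a 5-cycle, as in the picture, with zero natural frequencies.
adj = [[4, 1], [0, 2], [1, 3], [2, 4], [3, 0]]
theta_T = graph_kuramoto_em(adj, omega=[0.0] * 5, t_max=1.0, dt=0.001)
print([round(t, 2) for t in theta_T])
```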
Take a sequence of graphs (GN)_{N≥1}, where each GN has N vertices. Let (θ^{GN}_i)_{i∈[N]} be the particles defined as before on the graph GN.

For each N ≥ 1 we define the empirical random measure over trajectories:

L^{GN} = (1/N) ∑_{i=1}^{N} δ_{θ^{GN}_i}

GOAL: to prove that L^{GN} converges to something, possibly satisfying an LDP. We also want to see how the limit depends on the sequence (GN)_{N∈N}.
Consider KN, the complete graph on N vertices. Dai Pra and den Hollander (Journal of Statistical Physics, 1996) proved:

(L^{KN})_{N∈N} converges to a McKean-Vlasov diffusion (a kind of self-interacting process that is important in the literature). More specifically, (L^{KN})_{N≥1} satisfies an LDP with a rate function I whose unique minimizer is the McKean-Vlasov diffusion.

Summing up: we have an SLLN with exponential rates of convergence and a nice description of the limit object.
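For reference, the McKean-Vlasov limit for this model is usually written as a nonlinear SDE; this is the standard formulation (my sketch, not a formula quoted from the talk), where the empirical measure of the neighbours is replaced by the law of the particle itself:

```latex
% Self-consistency is what makes the process "self-interacting":
% the coefficient depends on the law of the solution.
\begin{align*}
  d\theta(t) &= \int \sin\bigl(\tilde\theta - \theta(t)\bigr)\, \mu_t(d\tilde\theta)\, dt
               + \omega\, dt + dB(t), \\
  \mu_t &= \operatorname{Law}\bigl(\theta(t)\bigr).
\end{align*}
```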
Our goal: to extend the previous nice result to other graphs.

Example: GN is an Erdős-Rényi graph with N vertices and edge probability p(N) ∈ [0, 1]. The mean degree is roughly Np(N).

Main achievement: in two different works, Oliveira-R. (Journal of Statistical Physics, 2019) and Oliveira-Stolerman-R. (arXiv, 2018), we proved that lim_{N→∞} L^{GN} is determined by whether lim_{N→∞} Np(N) = ∞ or lim_{N→∞} Np(N) = c < ∞.
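A tiny simulation (my own, with arbitrary parameters) makes the sparse regime concrete: sampling G(N, p) with p(N) = c/N gives a mean degree concentrating near c, which is the Np(N) → c < ∞ case above:

```python
import random

def erdos_renyi(n, p, rng):
    """Sample G(n, p) as adjacency lists: each of the n*(n-1)/2
    possible edges is present independently with probability p."""
    adj = [[] for _ in range(n)]
    for i in range(n):
        for j in range(i + 1, n):
            if rng.random() < p:
                adj[i].append(j)
                adj[j].append(i)
    return adj

# Sparse regime: p(N) = c / N keeps the mean degree near c as N grows.
rng = random.Random(1)
n, c = 2000, 3.0
adj = erdos_renyi(n, c / n, rng)
mean_degree = sum(len(nbrs) for nbrs in adj) / n
print(round(mean_degree, 2))  # concentrates around c = 3
```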
lim_{N→∞} Np(N)   | = +∞                                  | = c < +∞
GN "looks like"    | KN (A_{KN} and A_{GN} are "close")    | GW(c) tree (Benjamini-Schramm, very nicely)
L^{GN} →           | McKean-Vlasov                         | new object
Limit theorems     | LDP                                   | hydrodynamic limit (SLLN), propagation of chaos
Sync?              | ??                                    | NO (through simulations)
Before giving more details, let's see some past results. To obtain a limit:

Medvedev [Med18] assumes Np(N) ≫ √N, in the noiseless case.

Coppini, Dietert and Giacomin [CDG18] proved the same kind of result assuming Np(N) ≫ log N (with other, weaker assumptions).

We do NOT assume any growth condition on the divergence of Np(N). Ours was the first result for this kind of model when Np(N) → c < ∞; a few months later a result of Kavita and others (arXiv, 2018) appeared.
Now I will try to be more precise about the results, and I will also try to give a very rough idea of the proofs. Due to the time limit, we will discuss the case Np(N) → c < ∞.
Let (G, i)_R denote the ball of radius R around vertex i in the graph G. The sequence (GN)_{N≥1} "locally looks like" the random GW(c) tree (T, o), in the sense that for every reference rooted graph (H, p):

lim_{N→∞} (1/N) #{ i ∈ [N] : (GN, i)_R ≅ (H, p) } = P( (T, o)_R ≅ (H, p) )

We can guess (heuristic): if (GN, i)_R ≅ (H, p), then θ^{GN}_i should be close to θ^{(H,p)}_p, the root particle of the system run on (H, p).

If we have the following (in a "good" way):

θ^{(GN,i)_R}_i ≍ θ^{(GN,i)_∞}_i = θ^{GN}_i   and   θ^{(T,o)_R}_o ≍ θ^{(T,o)_∞}_o = θ^T_o,

then

(1/N) ∑_{i=1}^{N} δ_{θ^{GN}_i} → Law of θ^T_o.
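The truncation θ^{(GN,i)_R}_i ≍ θ^{GN}_i can be probed numerically (my own experiment on a toy graph, not taken from the papers): run the noiseless-frequency system twice with the same Brownian increments, once on the whole graph and once on the R-ball of a root vertex, and compare the root trajectories:

```python
import math
import random

def ball(adj, root, radius):
    """Vertices within graph distance `radius` of `root` (BFS)."""
    dist = {root: 0}
    frontier = [root]
    while frontier:
        nxt = []
        for u in frontier:
            for v in adj[u]:
                if v not in dist and dist[u] < radius:
                    dist[v] = dist[u] + 1
                    nxt.append(v)
        frontier = nxt
    return set(dist)

def em_on_subset(adj, keep, noise, dt):
    """Run the SDE system restricted to the vertex set `keep`,
    reusing the precomputed Gaussian increments `noise[step][vertex]`
    so both runs are driven by the SAME Brownian motions."""
    theta = {v: 0.0 for v in keep}
    for incr in noise:
        new = {}
        for v in keep:
            nbrs = [u for u in adj[v] if u in keep]
            d = max(len(nbrs), 1)
            drift = sum(math.sin(theta[u] - theta[v]) for u in nbrs) / d
            new[v] = theta[v] + drift * dt + incr[v]
        theta = new
    return theta

# Toy graph: a cycle of 200 vertices; root 0; radius R = 6; horizon T = 1.
rng = random.Random(2)
n, dt, steps, R = 200, 0.01, 100, 6
adj = [[(i - 1) % n, (i + 1) % n] for i in range(n)]
sq = math.sqrt(dt)
noise = [{v: sq * rng.gauss(0.0, 1.0) for v in range(n)} for _ in range(steps)]
full = em_on_subset(adj, set(range(n)), noise, dt)
local = em_on_subset(adj, ball(adj, 0, R), noise, dt)
gap = abs(full[0] - local[0])
print(gap)  # small: the root barely feels the graph outside its R-ball
```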
New object (limit): for a.e. realization of (T, o) we can solve (with uniqueness) the possibly infinite system of SDEs, obtaining the process (θ^T_v)_{v∈T}.

SLLN: for any bounded Lipschitz function h, almost surely

(1/N) ∑_{i=1}^{N} h(θ^{GN}_i) → E[ h(θ^T_o) ]

Propagation of chaos: let (T1, o1), · · · , (Tk, ok) be i.i.d. GW(c) trees. For bounded Lipschitz functions f1, · · · , fk,

lim_{N→∞} (1/N^k) ∑_{u1,··· ,uk ∈ [N]} E[ ∏_{i=1}^{k} fi(θ^{GN}_{ui}) ] = ∏_{i=1}^{k} E[ fi(θ^{Ti}_{oi}) ]
All the results are based on the fact that we can make the approximations:

θ^{(GN,i)_R}_i ≍ θ^{(GN,i)_∞}_i = θ^{GN}_i   and   θ^{(T,o)_R}_o ≍ θ^{(T,o)_∞}_o = θ^T_o.

For example, to prove the SLLN we create intermediate steps:

(1/N) ∑_{i=1}^{N} h(θ^{GN}_i) ≈ (1/N) ∑_{i=1}^{N} h(θ^{(GN,i)_R}_i) ≈ · · ·

Using these intermediate steps we can apply classical concentration inequalities for functions of i.i.d. Brownian motions. Next I will talk about the "approximations".
Now I will tell you how to "prove" that θ^{(GN,i)_R}_i ≍ θ^{(GN,i)_∞}_i.

Using an extended version of Gronwall's inequality we can show that

|θ^{GN}_i(t) − θ^{(GN,i)_R}_i(t)| ≤ P^{SRW}_i { the SRW started at i reaches ∂(GN, i)_R by time t }

Using a bound for the SRW on GN we conclude that

|θ^{GN}_i(t) − θ^{(GN,i)_R}_i(t)| ≲ |∂(GN, i)_R| exp( −R log(R/T) )
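Where does the exponential factor come from? Here is one standard route (my reconstruction under the assumption that the walk jumps at rate one; the talk's exact constants may differ): to reach the boundary of the R-ball the walk must make at least R jumps, and the jump count by time T is Poisson(T), so a Chernoff bound gives

```latex
\begin{align*}
  \mathbb{P}^{\mathrm{SRW}}_i\bigl(\text{reach } \partial(G_N,i)_R \text{ by time } T\bigr)
    &\le \mathbb{P}\bigl(\mathrm{Poisson}(T) \ge R\bigr) \\
    &\le e^{-T}\Bigl(\frac{eT}{R}\Bigr)^{R}
     = \exp\Bigl(-T + R - R\log\tfrac{R}{T}\Bigr),
\end{align*}
```

which decays super-exponentially in R for fixed T: the root particle is essentially insensitive to the graph outside a ball of modest radius.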
[CDG18] F. Coppini, H. Dietert, and G. Giacomin. A law of large numbers and large deviations for interacting diffusions. arXiv e-prints, 2018.

[Med18] G. S. Medvedev. The continuum limit of the Kuramoto model on sparse directed graphs. arXiv e-prints, February 2018.

[ORS18] Roberto I. Oliveira, Guilherme H. Reis, and Lucas M. Stolerman. Interacting diffusions on sparse graphs: hydrodynamics from local weak limits. arXiv e-prints, arXiv:1812.11924, December 2018.

[RO18] Roberto I. Oliveira and Guilherme H. Reis. Interacting diffusions on random graphs with diverging degrees: hydrodynamics and large deviations. arXiv e-prints, July 2018.