Interacting diffusions on random sparse graphs
EBP, July 2019
Guilherme Reis, Federal University of Bahia (ghreis@impa.br)
Joint work with Roberto Imbuzeiro Oliveira, IMPA (rimfo@impa.br), and Lucas Stolerman, UCSD (lstolerman@eng.ucsd.edu)



Outline

I’m going to present a mean-field model of particle systems. Mean-field means every particle has a direct interaction with all the other particles. My purpose in this talk is to show how we go beyond the mean-field case. Remark: everything is loosely stated, and I will present everything through a particular example. First I will introduce the Kuramoto model.

The Kuramoto model

The Kuramoto model is the following system of ODEs (t ∈ [0, T]):

dθ_i(t) = (1/N) ∑_{j=1}^{N} sin(θ_j(t) − θ_i(t)) dt + ω_i dt,   i ∈ {1, …, N}.

  • The equation describes the path of particle i over the time interval [0, T].
  • Particle i interacts with all the other particles (mean field); we normalize by N.
  • sin(θ_j(t) − θ_i(t)) is the interaction between the particles.
  • The ω_i are the natural frequencies: in the absence of interaction the particles evolve independently with velocities ω_i.
  • Mean field corresponds to the complete graph.
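As a concrete illustration (not from the talk), here is a minimal forward-Euler sketch of the system above; the step size, horizon, seed, and the choice ω_i = 0 are my own assumptions:

```python
import numpy as np

def kuramoto_euler(theta0, omega, T=20.0, dt=0.01):
    """Forward-Euler integration of the mean-field Kuramoto ODE:
    dtheta_i = (1/N) * sum_j sin(theta_j - theta_i) dt + omega_i dt."""
    theta = np.array(theta0, dtype=float)
    for _ in range(int(T / dt)):
        # entry [i, j] of the broadcasted array is sin(theta_j - theta_i);
        # averaging over j gives the mean-field coupling term
        coupling = np.sin(theta[None, :] - theta[:, None]).mean(axis=1)
        theta = theta + (coupling + omega) * dt
    return theta

rng = np.random.default_rng(0)
N = 50
theta_T = kuramoto_euler(rng.uniform(0, 2 * np.pi, N), np.zeros(N))
# order parameter r = |mean(exp(i*theta))|: near 0 for scattered phases,
# near 1 when the phases have synchronized
r = np.abs(np.exp(1j * theta_T).mean())
```

With identical natural frequencies the coupling is a gradient flow toward phase alignment, so r ends up close to 1; heterogeneous ω_i would compete against synchronization.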

Motivation

Lucas and Roberto were very interested in the Kuramoto model. They were trying to understand the behaviour of neuronal populations during epileptic phenomena. The thing is that the Kuramoto model is famous for explaining synchronization: https://www.youtube.com/watch?v=T58lGKREubo
In the works I’m going to present we go in another direction: we do NOT have any theorems about synchronization here (although we do have some simulations). Since our spins live in R (and not in S^1) we need t ∈ [0, T]. Our contribution is to remove the mean-field assumption.

Interacting particles on graphs

[Figure: a graph G on vertices 1, …, 5; each vertex i carries a spin θ_i(t) ∈ R.]

Start with a graph G. Think of each particle as sitting on a vertex: it is like a spin in R evolving in time. We want the Kuramoto model with interactions given by G.

The stochastic Kuramoto model on graphs

The particles follow the following system of SDEs (t ∈ [0, T]):

dθ_i(t) = (1/d_i) ∑_{j ∼_G i} sin(θ_j(t) − θ_i(t)) dt + ω_i dt + dB_i(t),   for all i ∈ G.

  • The equation describes the evolution of the random path of each particle.
  • Particle i interacts with its neighbours; we normalize by the degree d_i.
  • The interaction function is the same as in the mean-field case.
  • The ω_i are the natural frequencies.
  • The B_i are independent standard Brownian motions.

What do we want to prove?
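A rough Euler–Maruyama sketch of this SDE system (my own code, not from the papers; the 5-cycle graph, time step, and zero natural frequencies are illustrative choices):

```python
import numpy as np

def kuramoto_on_graph(adj, theta0, omega, T=1.0, dt=1e-3, rng=None):
    """Euler-Maruyama scheme for
    dtheta_i = (1/d_i) * sum_{j ~ i} sin(theta_j - theta_i) dt + omega_i dt + dB_i(t)."""
    rng = np.random.default_rng() if rng is None else rng
    A = np.asarray(adj, dtype=float)
    deg = A.sum(axis=1)
    deg[deg == 0] = 1.0                    # isolated vertices get no interaction term
    theta = np.array(theta0, dtype=float)
    for _ in range(int(T / dt)):
        # entry [i, j] is A_ij * sin(theta_j - theta_i); summing over j and
        # dividing by d_i gives the degree-normalized drift
        drift = (A * np.sin(theta[None, :] - theta[:, None])).sum(axis=1) / deg
        noise = rng.normal(scale=np.sqrt(dt), size=theta.shape)   # increments of B_i
        theta = theta + (drift + omega) * dt + noise
    return theta

# a 5-cycle, echoing the 5-vertex picture above
N = 5
A = np.zeros((N, N))
for i in range(N):
    A[i, (i + 1) % N] = A[(i + 1) % N, i] = 1.0
theta_T = kuramoto_on_graph(A, np.zeros(N), np.zeros(N),
                            rng=np.random.default_rng(1))
```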

Law of Large Numbers

Take a sequence of graphs (G_N)_{N≥1}, where each G_N has N vertices. Let (θ_i^{G_N})_{i∈[N]} be the particles defined as before on the graph G_N. For each N ≥ 1 we define the empirical random measure over trajectories:

L^{G_N} = (1/N) ∑_{i=1}^{N} δ_{θ_i^{G_N}}.

GOAL: prove that L^{G_N} converges to something, possibly satisfying an LDP. We also want to see how the limit depends on the sequence (G_N)_{N∈N}.

The classical result: Mean-Field

Consider K_N, the complete graph with N vertices. Dai Pra and den Hollander (Journal of Statistical Physics, 1996): (L^{K_N})_{N∈N} converges to a McKean–Vlasov diffusion (a kind of self-interacting process that is important in the literature). More specifically, (L^{K_N})_{N≥1} satisfies an LDP with a rate function I whose unique minimizer is the McKean–Vlasov diffusion. Summing up: we have an SLLN with exponential rates of convergence and a nice description of the limit object.

Our contributions

Our goal: to extend the previous nice result to other graphs. Example: G_N is an Erdős–Rényi graph with N vertices and edge probability p(N) ∈ [0, 1]. The mean degree is roughly Np(N).
Main achievement: in two different works, Oliveira–Reis (Journal of Statistical Physics, 2019) and Oliveira–Stolerman–Reis (arXiv, 2018), we proved that lim_{N→∞} L^{G_N} is determined by whether lim_{N→∞} Np(N) = ∞ or lim_{N→∞} Np(N) = c < ∞.
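For intuition about the "mean degree ≈ Np(N)" remark, a quick sampling sketch (graph size, c, and seed are my own choices):

```python
import numpy as np

def erdos_renyi_degrees(N, p, rng):
    """Sample an Erdos-Renyi graph G(N, p) and return its degree sequence."""
    coins = rng.random((N, N)) < p     # i.i.d. coin flips for all ordered pairs
    A = np.triu(coins, k=1)            # keep one coin per unordered pair {i, j}
    A = A | A.T                        # symmetrize: undirected adjacency matrix
    return A.sum(axis=1)

rng = np.random.default_rng(42)
N, c = 2000, 3.0
deg = erdos_renyi_degrees(N, c / N, rng)   # edge probability p(N) = c/N
mean_deg = deg.mean()                      # concentrates near Np(N) = c = 3
```

In the sparse regime p(N) = c/N each degree is Binomial(N − 1, c/N) ≈ Poisson(c), which is exactly the offspring law of the GW(c) tree appearing below.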

Our contributions

                      lim Np(N) = +∞                       lim Np(N) = c < +∞
G_N “looks like”      K_N                                  the GW(c) tree
in the sense that     A_{K_N} and A_{G_N} are “close”      Benjamini–Schramm (very nicely)
L^{G_N} → ??          McKean–Vlasov                        new object
limit theorem         LDP                                  hydrodynamic limit (SLLN),
                                                           propagation of chaos
Sync?                 ??                                   NO (through simulations)

Previous results

Before giving more details, let’s see some past results. To obtain a limit:

  • Medvedev [Med18] assumes Np(N) ≫ √N in the noiseless case.
  • Coppini, Dietert and Giacomin [CDG18] proved the same kind of result assuming Np(N) ≫ log N (with other, weaker assumptions).

We are NOT assuming any growth condition on the divergence of Np(N). Ours was the first result for this kind of model when Np(N) → c < ∞; a few months later there is a result by Kavita Ramanan and coauthors (arXiv, 2018).

Next steps

Now I will try to be more precise about the results. I will also try to give a very rough idea of the proofs. Due to the time limit, we will discuss the case when Np(N) → c < ∞.

Np(N) → c < ∞

The sequence (G_N)_{N≥1} “locally looks like” the random GW(c) tree (T, o), in the sense that for every reference rooted graph (H, p) (here (·)_R denotes the radius-R neighbourhood of the root):

(1/N) #{1 ≤ i ≤ N : (G_N, i)_R = (H, p)_R} → P((T, o)_R = (H, p)_R)   as N → ∞.

We can guess (heuristically):

(1/N) #{1 ≤ i ≤ N : θ_i^{(G_N,i)_R} = θ_p^{(H,p)_R}} → P(θ_o^{(T,o)_R} = θ_p^{(H,p)_R})   as N → ∞.

If we have the following (in a “good” way):

θ_i^{(G_N,i)_R} ≍ θ_i^{(G_N,i)_∞} = θ_i^{G_N}   and   θ_o^{(T,o)_R} ≍ θ_o^{(T,o)_∞} = θ_o^{T},

then we can imagine that

(1/N) ∑_{i=1}^{N} δ_{θ_i^{G_N}} → Law of θ_o^{T}.
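A small sampler for the limiting GW(c) tree, truncated at a finite depth (a hypothetical helper of my own; the node ids and dict representation are arbitrary conventions):

```python
import numpy as np

def sample_gw_tree(c, depth, rng):
    """Galton-Watson tree with Poisson(c) offspring, truncated at `depth`.
    Returned as a dict mapping each node id to the list of its children."""
    tree = {0: []}           # node 0 is the root o
    frontier = [0]           # nodes in the current generation
    next_id = 1
    for _ in range(depth):
        new_frontier = []
        for v in frontier:
            k = int(rng.poisson(c))                       # Poisson(c) offspring
            children = list(range(next_id, next_id + k))
            tree[v] = children
            tree.update((ch, []) for ch in children)
            next_id += k
            new_frontier.extend(children)
        frontier = new_frontier
    return tree

rng = np.random.default_rng(7)
# the root degree over many samples is Poisson(c), matching the
# local picture of the Erdos-Renyi graph G(N, c/N)
root_deg = [len(sample_gw_tree(3.0, 1, rng)[0]) for _ in range(5000)]
avg_root_deg = float(np.mean(root_deg))    # concentrates near c = 3
```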

Main theorems

New limit object: for a.e. realization of (T, o) we can solve (with uniqueness) the possibly infinite system of SDEs, obtaining the process (θ_v^{T})_{v∈T}.

SLLN: for any bounded Lipschitz function h, almost surely,

(1/N) ∑_{i=1}^{N} h(θ_i^{G_N}) → E[h(θ_o^{T})].

Propagation of chaos: let (T_1, o_1), …, (T_k, o_k) be i.i.d. GW(c) trees. For bounded Lipschitz functions f_1, …, f_k,

lim_{N→∞} (1/N^k) ∑_{u_1,…,u_k=1}^{N} E[∏_{i=1}^{k} f_i(θ_{u_i}^{G_N})] = ∏_{i=1}^{k} E[f_i(θ_{o_i}^{T_i})].

Main idea of the proof

All the results are based on the fact that we can make the approximations

θ_i^{(G_N,i)_R} ≍ θ_i^{(G_N,i)_∞} = θ_i^{G_N}   and   θ_o^{(T,o)_R} ≍ θ_o^{(T,o)_∞} = θ_o^{T}.

For example, to prove the SLLN we create intermediate steps:

(1/N) ∑_{i=1}^{N} h(θ_i^{G_N}) ≍ (1/N) ∑_{i=1}^{N} h(θ_i^{(G_N,i)_R}) ≍ E[h(θ_o^{(T,o)_R})] ≍ E[h(θ_o^{T})].

Using these intermediate steps we can apply classical concentration inequalities for functions of i.i.d. Brownian motions. Next I will talk about the “approximations.”

The “approximations”

Now I will tell you how to “prove” that θ_i^{(G_N,i)_R} ≍ θ_i^{(G_N,i)_∞}.

Using an extended version of Gronwall’s inequality we can show that

|θ_i^{G_N}(t) − θ_i^{(G_N,i)_R}(t)| ≤ P_i^{SRW}(the walk reaches ∂(G_N, i)_R by time t).

Using a bound for the SRW on G_N we conclude that

|θ_i^{G_N}(t) − θ_i^{(G_N,i)_R}(t)| ≤ |∂(G_N, i)_R| exp(−(R/T) log(R/T)).


Thanks!


References

[CDG18] F. Coppini, H. Dietert, and G. Giacomin. A law of large numbers and large deviations for interacting diffusions on Erdős–Rényi graphs. arXiv e-prints, July 2018.

[Med18] G. S. Medvedev. The continuum limit of the Kuramoto model on sparse directed graphs. arXiv e-prints, February 2018.

[ORS18] Roberto I. Oliveira, Guilherme H. Reis, and Lucas M. Stolerman. Interacting diffusions on sparse graphs: hydrodynamics from local weak limits. arXiv:1812.11924, December 2018.

[RO18] G. H. Reis and R. I. Oliveira. Interacting diffusions on random graphs with diverging degrees: hydrodynamics and large deviations. arXiv e-prints, July 2018.