Metropolis Markov chains for wireless random-access networks - PowerPoint PPT Presentation


SLIDE 1

Metropolis Markov chains for wireless random-access networks

Alessandro Zocca

joint work with Sem C. Borst, Johan S. H. van Leeuwaarden, Francesca R. Nardi

Berlin, October 24th, 2014

SLIDES 2-7

Conflict graph

Undirected graph G = (V, E) with N vertices (sites)

  • Hard-core constraints → edges
  • Presence of a particle → occupied site

SLIDES 8-10

Admissible configurations

Any admissible configuration can be represented by a vector x = (x1, . . . , xN) ∈ {0, 1}^N, where xi = 1 ⟺ site i is occupied.

The set of admissible configurations is

Ω(G) = {x ∈ {0, 1}^N : xi + xj ≤ 1 ∀ (i, j) ∈ E}

Ω(G) ≡ I(G) = {I ⊂ V : I independent set}
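The correspondence between admissible configurations and independent sets is easy to check by brute force on a small conflict graph. A minimal sketch (the choice of the 5-cycle and the helper name `admissible_configs` are illustrative, not from the talk):

```python
from itertools import product

def admissible_configs(n, edges):
    """All x in {0,1}^n with x_i + x_j <= 1 for every edge (i, j)."""
    return [x for x in product((0, 1), repeat=n)
            if all(x[i] + x[j] <= 1 for i, j in edges)]

# Conflict graph: the 5-cycle C5 on vertices 0..4
edges = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0)]
omega = admissible_configs(5, edges)

# Each admissible configuration is the indicator vector of an independent set;
# C5 has 11 independent sets (the Lucas number L_5).
print(len(omega))  # 11
```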

SLIDES 11-15

Conflict graph for wireless networks

G = (V, E) → wireless network contention graph

  • N devices/users
  • Edges → interference constraints
  • Occupied site → transmitting node

SLIDES 16-19

Random-access wireless networks

Randomized medium-access algorithms provide a popular mechanism for distributed medium-access control with low implementation complexity.

Carrier-Sense Multiple-Access (CSMA): each node obeys a random back-off time before starting a transmission

  • back-off timer frozen while the medium is sensed busy
  • a new back-off timer is set after each transmission

Main goal

Study the effect of the network topology on its delay performance

SLIDES 20-25

Local dynamics

  • Back-off periods at each node are i.i.d. Exp(ν)
  • The back-off period is "frozen" while the node is blocked
  • Packet transmission times at each node are i.i.d. Exp(1)

SLIDES 26-29

Markov process

Let Xt ∈ Ω represent the network activity configuration at time t.

(Xt)t≥0 is a reversible continuous-time Markov process on Ω with transition rates

q(x, y) = ν if y = x + ei ∈ Ω,
q(x, y) = 1 if y = x − ei ∈ Ω,
q(x, y) = 0 otherwise.

The Markov process (Xt)t≥0 is irreducible and positive recurrent on Ω, with product-form stationary distribution

π(x) = Z⁻¹ ν^(x1 + · · · + xN), x ∈ Ω
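Reversibility with respect to the product form can be confirmed numerically: with π(x) ∝ ν^(number of active nodes), detailed balance π(x) q(x, y) = π(y) q(y, x) holds for every pair of configurations differing in one coordinate. A small sketch (the 3-node path graph and the helper names are an illustrative choice):

```python
from itertools import product

nu = 7.0
edges = [(0, 1), (1, 2)]            # conflict graph: path on 3 nodes
omega = [x for x in product((0, 1), repeat=3)
         if all(x[i] + x[j] <= 1 for i, j in edges)]

def rate(x, y):
    """Transition rate q(x, y): activate at rate nu, deactivate at rate 1."""
    diff = [i for i in range(3) if x[i] != y[i]]
    if len(diff) != 1:
        return 0.0
    return nu if y[diff[0]] == 1 else 1.0

def pi(x):
    """Unnormalized product-form weight nu^(number of active nodes)."""
    return nu ** sum(x)

# Detailed balance: pi(x) q(x, y) == pi(y) q(y, x) for all pairs
for x in omega:
    for y in omega:
        assert abs(pi(x) * rate(x, y) - pi(y) * rate(y, x)) < 1e-9
print("detailed balance holds")
```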

SLIDES 30-33

Long-term vs. short-term fairness

Product-form stationary distribution of (Xt)t≥0:

π(x) = Z⁻¹ ν^(x1 + · · · + xN), x ∈ Ω

As ν grows large, the configurations with the most active nodes attract most of the probability mass.

However, these maximum-size transmission patterns are "rigid".

Regime of interest: ν → ∞
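The concentration of mass on the largest configurations is visible already on tiny instances. A sketch, again on the illustrative 3-node path graph (whose unique maximum independent set consists of the two endpoints):

```python
from itertools import product

edges = [(0, 1), (1, 2)]            # path on 3 nodes
omega = [x for x in product((0, 1), repeat=3)
         if all(x[i] + x[j] <= 1 for i, j in edges)]
max_size = max(sum(x) for x in omega)

def mass_on_maximum(nu):
    """Stationary probability of the maximum-size configurations."""
    z = sum(nu ** sum(x) for x in omega)
    return sum(nu ** sum(x) for x in omega if sum(x) == max_size) / z

for nu in (1, 10, 100):
    print(nu, round(mass_on_maximum(nu), 4))
# The fraction increases toward 1 as nu grows
```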

SLIDES 34-37

Transition times and mixing times

Dominant configurations = maximum/maximal independent sets of G

We study the asymptotic behavior as ν → ∞ of

  • the transition time T(ν) between dominant configurations
  • the mixing time tmix(ε, ν) of the activity process (Xt)t≥0

SLIDES 38-42

Complete K-partite networks [ZBvL12]

Example: G = K4,3,5,2,6, i.e. K = 5 components of sizes (L1, L2, L3, L4, L5) = (4, 3, 5, 2, 6)

State space aggregation: Ω (left) and Ω∗ (right)

Aggregated state space:

Ω∗ = {0} ∪ {(k, l) ∈ N² : 1 ≤ k ≤ K, 1 ≤ l ≤ Lk}

SLIDES 43-45

State space aggregation

The state aggregation yields a new Markov process (X∗t)t≥0 on

Ω∗ = {0} ∪ {(k, l) ∈ N² : 1 ≤ k ≤ K, 1 ≤ l ≤ Lk},

with transition rates

q((k, l), (k, l + 1)) = (Lk − l)ν, for 1 ≤ l < Lk,
q((k, l), (k, l − 1)) = l, for 1 < l ≤ Lk,
q(0, (k, 1)) = Lk ν,
q((k, 1), 0) = 1,

and stationary distribution

π(k,l)(ν) = π0(ν) (Lk choose l) ν^l, for l = 1, . . . , Lk, k = 1, . . . , K,

π0(ν) = ( 1 + Σ_{k=1}^{K} Σ_{l=1}^{Lk} (Lk choose l) ν^l )⁻¹
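The stated stationary distribution can be checked against these rates via detailed balance along each branch. A short numerical sketch, using the component sizes of the running K4,3,5,2,6 example (components indexed from 0 here; helper name illustrative):

```python
from math import comb

nu = 3.0
L = [4, 3, 5, 2, 6]                 # component sizes (L1, ..., L5)

def pi_unnorm(state):
    """Unnormalized stationary weight: 1 for state 0, C(Lk, l) nu^l for (k, l)."""
    if state == 0:
        return 1.0
    k, l = state
    return comb(L[k], l) * nu ** l

for k in range(len(L)):
    # boundary: 0 <-> (k, 1), rates Lk*nu up and 1 down
    assert abs(pi_unnorm(0) * L[k] * nu - pi_unnorm((k, 1)) * 1.0) < 1e-9
    # interior: (k, l) <-> (k, l + 1), rates (Lk - l)*nu up and l + 1 down
    for l in range(1, L[k]):
        up = pi_unnorm((k, l)) * (L[k] - l) * nu
        down = pi_unnorm((k, l + 1)) * (l + 1)
        assert abs(up - down) < 1e-6
print("aggregated chain is reversible w.r.t. the stated distribution")
```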

SLIDES 46-47

Dominant configurations

The maximal independent sets of G correspond to the configurations si = (ki, Li) ∈ Ω∗, i = 1, . . . , K.

si is also a maximum independent set if Li = maxk Lk.

SLIDES 48-49

Asymptotic escape time T(k,l)→0 from branch k

Proposition (Asymptotics for the escape time)

For any 0 < l ≤ Lk,

E T(k,l)→0(ν) ∼ (1/Lk) ν^(Lk−1), as ν → ∞,

and

T(k,l)→0(ν) / E T(k,l)→0(ν) → Exp(1) in distribution, as ν → ∞.

Remark: no dependence on the initial number l of active nodes!
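Both the leading order (1/Lk) ν^(Lk−1) and its insensitivity to l can be checked exactly: within a branch the aggregated process is a birth-death chain, and first-step analysis gives the backward recursion u_l = (1 + (Lk − l)ν u_{l+1}) / l for the mean passage time from l to l − 1 (the recursion is my derivation, not from the slides; function name illustrative):

```python
def mean_escape_times(Lk, nu):
    """Mean hitting times of state 0 from l = 1..Lk within one branch of the
    aggregated chain (birth rates (Lk - l)*nu, death rates l)."""
    u = [0.0] * (Lk + 2)            # u[l] = mean time to go from l to l - 1
    for l in range(Lk, 0, -1):
        u[l] = (1.0 + (Lk - l) * nu * u[l + 1]) / l
    # hitting time of 0 from l is u[1] + ... + u[l]
    return [sum(u[1:l + 1]) for l in range(1, Lk + 1)]

Lk, nu = 3, 1000.0
leading = nu ** (Lk - 1) / Lk        # predicted leading order (1/Lk) nu^(Lk-1)
for t in mean_escape_times(Lk, nu):
    assert abs(t / leading - 1) < 0.01   # same leading order for every l
```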

SLIDES 50-56

Stochastic representation for the transition time Ts1→s2

Exactly in distribution,

Ts1→s2 =d Ts1→0 + Σ_{k≠2} Σ_{i=1}^{Nk} ( T̂(i)0→(k,1) + T(i)(k,1)→0 ) + T̂0→s2,

where Nk =d Geo( L2 / (L2 + Lk) ).

Asymptotically only the deepest branches matter. Define L∗ := max_{k≠2} Lk and K∗ = {k ≠ 2 : Lk = L∗}. Then

Ts1→s2 ≈ Ts1→0 + Σ_{i=1}^{M} ( T̂(i)0→(k,1) + T(i)(k,1)→0 ) + T̂0→s2,

where M =d Geo( |K∗|L∗ / (|K∗|L∗ + L2) ).

SLIDES 57-59

Asymptotics for the transition time Ts1→s2

Theorem

E Ts1→s2(ν) ∼ ( 1{s1∈K∗}/L∗ + |K∗|/L2 ) ν^(L∗−1), as ν → ∞,

and

Ts1→s2(ν) / E Ts1→s2(ν) → (1/EM) Σ_{i=1}^{M} Yi in distribution, as ν → ∞,

where M =d Geo(p∗) + 1{s1∈K∗}, p∗ = |K∗|L∗ / (|K∗|L∗ + L2), and the Yi are i.i.d. Exp(1).

In particular, s1 ∈ K∗ ⟹ Ts1→s2(ν) / E Ts1→s2(ν) → Exp(1) in distribution, as ν → ∞.

SLIDES 60-65

Mixing time

Relabel the components such that L1 ≥ L2 ≥ · · · ≥ LK.

Theorem

tmix(ε, ν) = Θ(ν^(L2−1)), as ν → ∞.

Idea of the proof:

  • Upper bound by a coupling argument
  • Lower bound by giving an upper bound for the conductance (bottleneck ratio)

Φ = min_{S⊂Ω : π(S)≤1/2} Q(S, S^c) / π(S)
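For a small state space the bottleneck ratio can be computed by brute force over all subsets. A sketch for the activity process on the 4-cycle (i.e. K2,2), taking Q(S, S^c) = Σ_{x∈S, y∉S} π(x) q(x, y) with the continuous-time rates; the graph choice and function name are illustrative. The conductance shrinks as ν grows, matching the lower-bound argument:

```python
from itertools import product, combinations

def conductance(nu):
    """Brute-force bottleneck ratio for the hard-core dynamics on C4 = K_{2,2}."""
    n, edges = 4, [(0, 1), (1, 2), (2, 3), (3, 0)]
    omega = [x for x in product((0, 1), repeat=n)
             if all(x[i] + x[j] <= 1 for i, j in edges)]
    w = {x: nu ** sum(x) for x in omega}        # unnormalized pi
    z = sum(w.values())

    def rate(x, y):
        diff = [i for i in range(n) if x[i] != y[i]]
        if len(diff) != 1:
            return 0.0
        return nu if y[diff[0]] == 1 else 1.0

    best = float("inf")
    for r in range(1, len(omega)):
        for S in combinations(omega, r):
            mass = sum(w[x] for x in S)
            if mass / z > 0.5:                  # only sets with pi(S) <= 1/2
                continue
            flow = sum(w[x] * rate(x, y) for x in S for y in omega if y not in S)
            best = min(best, flow / mass)
    return best

print(conductance(1.0), conductance(100.0))     # bottleneck deepens with nu
```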

SLIDES 66-68

Heterogeneous complete K-partite graph [ZBvL14]

Heterogeneity: users in component k have a back-off rate fk(ν).

Asymptotics for the transition time Ts1→s2 (part I)

E Ts1→s2(ν) ∼ (1/Ls1) fs1(ν)^(Ls1−1) + ( 1 / (Ls2 fs2(ν)) ) Σ_{k∈K∗} fk(ν)^(Lk), as ν → ∞,

where γk := lim_{ν→∞} fk(ν)^(Lk) / Σ_{j≠s2} fj(ν)^(Lj) and K∗ := {k ≠ s2 : γk > 0}.

SLIDES 69-70

Heterogeneous complete K-partite graph [ZBvL14]

Asymptotics for the transition time Ts1→s2 (part II)

Ts1→s2(ν) / E Ts1→s2(ν) → Z = αY + (1 − α)W in distribution, as ν → ∞,

where Y =d Exp(1) and the Laplace transform of W is

LW(s) = ( 1 + Σ_{k∈A} γk s / (1 + γk s / βk) + s Σ_{k∈S} γk )⁻¹,

with βk := lim_{ν→∞} Lk fk(ν) / (Ls2 fs2(ν)), and K∗ partitioned into three subsets:

  • k ∈ N if βk = 0,
  • k ∈ A if βk ∈ (0, ∞),
  • k ∈ S if βk = ∞.

SLIDE 71

Heterogeneous complete K-partite graph [ZBvL14]

Possible asymptotic distributions for Z, by the value of α and whether A and S are empty (∅) or non-empty (n.e.):

α = 0:
  • A = ∅, S = ∅: δ0 (trivial r.v. identical to 0)
  • A n.e., S = ∅: Σ_{i=1}^{G} Hi(β1/βA, . . . , βm/βA, β1/γ1, . . . , βm/γm); reduces to Σ_{i=1}^{G} Expi(λ) if βk/γk = λ ∀ k ∈ A
  • A = ∅, S n.e.: Exp(1/γS)
  • A n.e., S n.e.: W

α ∈ (0, 1):
  • A = ∅, S = ∅: Exp(1/α)
  • A n.e., S = ∅: Exp(1/α) + Σ_{i=1}^{G} Hi(β1/βA, . . . , βm/βA, β1/((1−α)γ1), . . . , βm/((1−α)γm)); reduces to Exp(1/α) + Σ_{i=1}^{G} Expi(λ/(1−α)) if βk/γk = λ ∀ k ∈ A; to Exp(α⁻¹(1 + Σ_{k=1}^{m} βk)⁻¹) if βk/γk = (1−α)/α ∀ k ∈ A; and to Exp(1) if βk/γk = (1−α)/α = Σ_{i=1}^{m} βk ∀ k ∈ A
  • A = ∅, S n.e.: Exp(1/α) + Exp(1/((1−α)γS)); reduces to Erlang(2, 1/α) if α = γS/(1 + γS)
  • A n.e., S n.e.: Exp(1/α) + (1 − α)W

α = 1: Exp(1)

SLIDES 72-74

Grid graphs [ZBvLN13]

Dominant configurations: the "even chessboard" E and the "odd chessboard" O

SLIDES 75-77

Boundary conditions

2K × 2L open, toric, and cylindrical grid networks

SLIDES 78-81

Transition and mixing times results

Let G be a grid graph.

Theorem (Mean transition time)

E TE→O(ν) grows at least as ν^(Γ(G)−1) as ν → ∞:

lim inf_{ν→∞} log E TE→O(ν) / log ν ≥ Γ(G) − 1

Theorem (Mixing time)

tmix(ε, ν) ≥ Cε ν^(Γ(G)−1), as ν → ∞.

SLIDES 82-85

Analysis of the state space Ω(G)

Key ingredient = analysis of the "bottlenecks" of the state space Ω

Intuition: during the transition from E to O the process has to visit "mixed configurations" with fewer active nodes.

Bottleneck = subset of low-probability configurations that must be crossed to go from E to O

SLIDES 86-89

Where is the bottleneck?

State space Ω for the complete bipartite grid

State space Ω for the 4 × 4 toric grid

SLIDES 90-93

Bottleneck analysis

In general, the bottleneck is a large and intricate subset of Ω(G), which consists of "mixed configurations".

Define the "inefficiency" of x ∈ Ω(G):

Δ(x) := N/2 − Σ_{i=1}^{N} xi

To understand how the transition E → O occurs, the key quantity is the efficiency gap

Γ(G) := min_{ω : E→O} max_{x∈ω} Δ(x)
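On small instances the efficiency gap can be computed exactly: minimizing the maximum inefficiency over paths ω from E to O in the single-site-flip graph on Ω(G) is a bottleneck shortest path problem, solvable with a Dijkstra variant that propagates max(Δ) instead of summed cost. A sketch for open grids (brute-force enumeration of Ω, so only small grids are feasible; function name illustrative):

```python
import heapq

def efficiency_gap(rows, cols):
    """Gamma(G) for the rows x cols open grid: minimax of Delta over E -> O paths."""
    n = rows * cols
    idx = lambda r, c: r * cols + c
    nbr = [0] * n                        # bitmask of grid neighbors per cell
    for r in range(rows):
        for c in range(cols):
            for dr, dc in ((0, 1), (1, 0), (0, -1), (-1, 0)):
                rr, cc = r + dr, c + dc
                if 0 <= rr < rows and 0 <= cc < cols:
                    nbr[idx(r, c)] |= 1 << idx(rr, cc)

    def independent(m):
        return all(not (m >> i) & 1 or (m & nbr[i]) == 0 for i in range(n))

    # Delta(x) = N/2 - (number of active sites), over all independent sets
    delta = {m: n // 2 - bin(m).count("1")
             for m in range(1 << n) if independent(m)}
    E = sum(1 << idx(r, c) for r in range(rows) for c in range(cols)
            if (r + c) % 2 == 0)
    O = E ^ ((1 << n) - 1)

    best = {E: delta[E]}
    heap = [(delta[E], E)]
    while heap:
        d, x = heapq.heappop(heap)
        if x == O:
            return d
        for i in range(n):               # flip one site at a time
            y = x ^ (1 << i)
            if y in delta and max(d, delta[y]) < best.get(y, float("inf")):
                best[y] = max(d, delta[y])
                heapq.heappush(heap, (best[y], y))

# Theorem predicts min{K, L} + 1 for the 2K x 2L open grid: 2 and 3
print(efficiency_gap(2, 2), efficiency_gap(4, 4))
```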

SLIDES 94-95

Metropolis Markov chains

Setting ν = e^β, the stationary distribution can be rewritten as

πβ(x) = e^(−βH(x)) / Σ_{y∈Ω} e^(−βH(y)), x ∈ Ω,

where H : Ω → R is defined as H(x) := −Σ_{i=1}^{N} xi for every x ∈ Ω.

After uniformization, the resulting Markov chain has Metropolis transition probabilities

Pβ(x, y) = q(x, y) e^(−β[H(y)−H(x)]⁺), if x ≠ y,
Pβ(x, x) = 1 − Σ_{z≠x} Pβ(x, z).
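A Metropolis chain of this form is easy to instantiate and check: with a uniform single-site-flip proposal q(x, y) = 1/N and H(x) = −Σ xi, the chain is stochastic and reversible with respect to πβ. A sketch under these assumptions (the 3-node path graph is again an illustrative conflict graph, not from the talk):

```python
import math
from itertools import product

beta = 2.0                                  # nu = e^beta
n, edges = 3, [(0, 1), (1, 2)]
omega = [x for x in product((0, 1), repeat=n)
         if all(x[i] + x[j] <= 1 for i, j in edges)]
H = {x: -sum(x) for x in omega}
Z = sum(math.exp(-beta * H[x]) for x in omega)
pi = {x: math.exp(-beta * H[x]) / Z for x in omega}

def P(x, y):
    """Metropolis probabilities: uniform flip proposal, acceptance e^{-beta [dH]+}."""
    if x == y:                               # leftover mass stays at x
        return 1.0 - sum(P(x, z) for z in omega if z != x)
    diff = [i for i in range(n) if x[i] != y[i]]
    if len(diff) != 1:
        return 0.0
    return (1.0 / n) * math.exp(-beta * max(H[y] - H[x], 0.0))

# Rows sum to one and detailed balance pi(x)P(x,y) = pi(y)P(y,x) holds
for x in omega:
    assert abs(sum(P(x, y) for y in omega) - 1.0) < 1e-12
    for y in omega:
        assert abs(pi[x] * P(x, y) - pi[y] * P(y, x)) < 1e-12
print("Metropolis chain is reversible w.r.t. pi_beta")
```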

SLIDES 96-98

Efficiency gap for grid networks

We can rewrite

Δ(x) = H(x) − min_{y∈Ω} H(y) ≥ 0,

and

Γ(G) = min_{ω : E→O} max_{x∈ω} Δ(x)

is the minimum energy barrier to overcome to go from E to O.

Theorem

Γ(G) = min{2K, 2L} + 1 if G is a 2K × 2L toric grid,
Γ(G) = min{K, L} + 1 if G is a 2K × 2L open grid,
Γ(G) = min{2K, L} + 1 if G is a 2K × 2L cylindrical grid.

SLIDES 99-101

Proof idea 1

Lemma (Inefficiency on a band)

ΔB(x) = 0 ⟺ x|B = E|B or x|B = O|B

Lemma (Inefficiency on rows)

Δr(x) = 0 ⟺ x|r = E|r or x|r = O|r

SLIDES 102-104

Proof idea 2 (Lower bound for Γ(G))

SLIDES 105-107

Recent results

Theorem (Mean transition time)

lim_{ν→∞} log E TE→O(ν) / log ν = Γ(G) − 1

Theorem (Asymptotic exponentiality)

TE→O(ν) / E TE→O(ν) → Exp(1) in distribution, as ν → ∞

SLIDES 108-109

Packet dynamics: non-saturated buffers

  • Saturated buffers
  • Non-saturated buffers

SLIDES 110-113

Packet queue dynamics

Packets arrive at the various nodes according to Poisson processes with rate λ.

Each node is active a fraction of time that increases to 1/2 as ν → ∞.

Normalized load: ρ = 2λ

Proposition (Stability)

In order for all queues to be stable, it is required that

ν > ρ / (2(1 − ρ))

SLIDES 114-117

A vicious circle

  • High load ρ requires a high activation rate ν
  • A high activation rate ν implies extremely slow transitions between the two dominant configurations E and O
  • Slow transitions cause starvation and hence excessively long queues and delays

Theorem (Average delay scaling)

E W(ρ) scales as ( 1 / (1 − ρ) )^(Γ(G)−1), as ρ ↑ 1

SLIDES 118-119

Transition time and delay

Theorem (Long-term average delay)

lim inf_{ν→∞} E W(ν) / E TE→O(ν) ≥ 1 / (4 − 2ρ)

Sketch of the proof

SLIDE 120

A simulation picture

6 × 6 toric grid with load ρ = 0.8

SLIDES 121-125

Future research

  • Multiple antennas and multiple frequencies
  • Effect of small perturbations of the conflict graph
  • Analysis of other regular conflict graphs (e.g. triangular and hexagonal lattices)

Thank you for your attention

References

[ZBvL12] A. Zocca, S.C. Borst, J.S.H. van Leeuwaarden. Mixing properties of CSMA networks on partite graphs. ValueTools 2012.
[ZBvLN13] A. Zocca, S.C. Borst, J.S.H. van Leeuwaarden, F.R. Nardi. Delay performance in random-access grid networks. Performance Evaluation, 70(10), 2013.
[ZBvL14] A. Zocca, S.C. Borst, J.S.H. van Leeuwaarden. Slow transitions, slow mixing and starvation in dense random-access networks. arXiv:1403.3325.