A closer look at the classical fixed-point analysis of WLANs
Rajesh Sundaresan
Indian Institute of Science
May 2017

Section 1: The classical fixed-point analysis
DCF: The 802.11 countdown and its Markovian caricature
◮ N nodes accessing the common medium in a wireless LAN.
◮ Each node’s (backoff) state space: Z = {0, 1, · · · , m − 1}.
◮ Transitions:
  0 → 1 → · · · → i → i + 1 → · · · → m − 1
◮ Assume three states, Z = {0, 1, 2}, i.e., m = 3.
◮ Attempt probability for a node in state i is ci/N.
◮ Aggressiveness of the transmission: c = (c0, c1, c2).
◮ The scaling by 1/N ensures that the overall attempt probability across all nodes in a slot remains O(1) as N grows.
◮ Conventional wisdom: exponential backoff, i.e., the attempt rate decreases geometrically with the backoff stage.
◮ Observation: your collision probability depends only on the empirical measure of nodes across states.
◮ ξ = current empirical measure of nodes across states.
◮ Number of nodes across states is (Nξ0, Nξ1, . . . , Nξm−1).
◮ If you are in state 0, the other nodes are spread as (Nξ0 − 1, Nξ1, . . . , Nξm−1).
◮ ⟨c, ξ⟩ is the aggregate attempt probability: Σ_{i=0}^{m−1} (Nξi)(ci/N) = Σ_{i=0}^{m−1} ci ξi.
◮ If N is small or if attempt probabilities don’t scale as 1/N, avoid the limit.
  0 → 1 → · · · → i → i + 1 → · · · → m − 1
◮ Weight the per-state attempt probabilities by powers of the conditional collision probability γ:
  c0/N + γ c1/N + γ² c2/N + · · · =: G(γ)
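A small numerical sketch of the fixed-point recipe. The stationary weights π_i ∝ γ^i (with normalisation), the collision relation γ = 1 − (1 − G(γ)/N)^{N−1}, and the aggressiveness vector c = (4, 2, 1) are all illustrative assumptions in the spirit of the classical analysis, not taken verbatim from the slides.

```python
def G(gamma, c=(4.0, 2.0, 1.0)):
    # Per-node attempt rate under decoupling, assuming the stationary
    # weight of backoff state i is proportional to gamma**i (one extra
    # backoff stage per collision).  c is an illustrative choice that
    # halves the attempt rate per stage (exponential backoff).
    num = sum(ci * gamma ** i for i, ci in enumerate(c))
    den = sum(gamma ** i for i in range(len(c)))
    return num / den

def solve_fixed_point(N, tol=1e-12, max_iter=10_000):
    # Picard iteration for the assumed classical relation
    #   gamma = 1 - (1 - G(gamma)/N)**(N-1),
    # i.e. "I collide iff at least one of the other N-1 nodes attempts".
    gamma = 0.5
    for _ in range(max_iter):
        new = 1.0 - (1.0 - G(gamma) / N) ** (N - 1)
        if abs(new - gamma) < tol:
            return new
        gamma = new
    return gamma

gamma = solve_fixed_point(N=20)
```

The iteration damps out quickly here; with non-monotone c (as later in the talk) the map can have multiple fixed points and simple iteration need not find them all.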
◮ why decoupling is a good assumption;
◮ when node-independent, state-independent conditional collision probabilities are reasonable approximations;
◮ and going a little beyond.
◮ Coupled dynamics.
◮ Embed slot boundaries on R+. Assume slots of duration 1/N.
◮ Transition rate = prob. of change in a slot / slot duration = O(1).
◮ Transition rate for success or failure depends on the states of the other nodes.
◮ At time t, node transition rates are as follows:
◮ i → i + 1 with rate λi,i+1(µN(t)).
◮ i → 0 with rate λi,0(µN(t)).
◮ In general, i → j with rate λi,j(µN(t)).
◮ Example:
◮ Write these as a matrix of rates: Λ(·) = [ λi,j(ξ) ]i,j∈Z.
◮ For ξ the empirical measure of a configuration, the rate matrix is Λ(ξ).
◮ The joint node-state process (X(N)1(t), . . . , X(N)N(t)) is Markov, but its state space grows with N.
◮ Study µN(·) instead, also a Markov process
◮ Then try to draw conclusions on the original process.
◮ A Markov process with state space the set of empirical measures on Z.
◮ This is a measure-valued flow across time.
◮ In the continuous-time version, a jump
  ξ → ξ + (1/N) ej − (1/N) ei occurs at rate Nξ(i) λi,j(ξ).
◮ For large N, changes are small, O(1/N), and occur at high rates, O(N).
◮ Fluid limit : µN converges to a deterministic limit given by an ODE.
◮ Recall Λ(·) = [ λi,j(·) ] without the diagonal entries. Then
  lim_{h↓0} (1/h) E[ µN(t + h)(k) − µN(t)(k) | µN(t) = ξ ]
    = Σ_{i≠k} ξ(i) λi,k(ξ) − ξ(k) Σ_{j≠k} λk,j(ξ).
◮ The rate of change in the kth component is made up of the increase term Σ_{i≠k} ξ(i) λi,k(ξ) (flow into state k)
◮ and the decrease term ξ(k) Σ_{j≠k} λk,j(ξ) (flow out of state k).
◮ Put these together: the drift at ξ is the vector whose kth component is Σ_{i≠k} ξ(i) λi,k(ξ) − ξ(k) Σ_{j≠k} λk,j(ξ).
◮ Anticipate that µN(·) will solve (in the large N limit) the ODE
  µ̇(t)(k) = Σ_{i≠k} µ(t)(i) λi,k(µ(t)) − µ(t)(k) Σ_{j≠k} λk,j(µ(t)), k ∈ Z.
◮ Nonlinear ODE: the rate matrix Λ(µ(t)) depends on the solution itself.
◮ The McKean-Vlasov ODE must be well-posed (e.g., Λ(·) Lipschitz).
◮ If µN(0) → µ(0) in probability, then
◮ µN(·) → µ(·) in probability, uniformly over compact time intervals [0, T].
◮ Let µ(·) be the solution to the McKean-Vlasov dynamics.
◮ Choose a node uniformly at random, and tag it.
◮ Given µN(t), the state of the tagged node at time t has distribution µN(t).
◮ As N → ∞, the limiting distribution is then µ(t).
◮ Tag k nodes.
◮ If the interaction is only through µN(t), and this converges to the deterministic µ(t), then:
◮ Each of the k nodes is then executing a time-inhomogeneous Markov process with rate matrix Λ(µ(t)).
◮ Asymptotically, no interaction ... decoupling.
◮ The node trajectories are (asymptotically) independent and identically distributed.
◮ Solve for the rest point of the dynamical system.
◮ If the solution is unique, predict that the system will settle down at this fixed point.
◮ Works very well for the exponential backoff.
◮ But not in general, due to limit cycles.
◮ The fixed point is unique, but unstable.
◮ All trajectories starting away from the fixed point converge to a limit cycle.
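The finite-N behaviour can be seen directly by simulating the coupled chain in slots. This is a sketch: the collision rules (a lone attempt succeeds and resets the node; two or more attempts collide and each attempter escalates) are an illustrative reading of the backoff chain described earlier, and the triple c = (0.5, 0.3, 8.0) echoes parameters that appear later in the slides.

```python
import random

def simulate(N=100, c=(0.5, 0.3, 8.0), slots=5_000, seed=1):
    # Slotted simulation of N coupled nodes.  In each slot a node in
    # backoff state i attempts with probability c[i]/N.  A lone attempt
    # succeeds (reset to state 0); two or more attempts collide (each
    # attempter escalates to min(i+1, m-1)).
    rng = random.Random(seed)
    m = len(c)
    state = [0] * N
    frac0 = []                      # trajectory of the fraction in state 0
    for _ in range(slots):
        attempts = [k for k in range(N) if rng.random() < c[state[k]] / N]
        if len(attempts) == 1:
            state[attempts[0]] = 0
        elif len(attempts) > 1:
            for k in attempts:
                state[k] = min(state[k] + 1, m - 1)
        frac0.append(state.count(0) / N)
    return frac0

traj = simulate()
```

Plotting `traj` against the slot index reproduces the kind of µN(·) sample path shown in the talk's figures: settling near an equilibrium, or wandering between metastable regions.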
◮ Large time behaviour for a finite N system: lim_{t→∞} µN(t).
◮ The fluid limit gives lim_{N→∞} µN(t) = µ(t) for each fixed t, and hence
  lim_{t→∞} lim_{N→∞} µN(t) = lim_{t→∞} µ(t).
◮ But we are trying to predict where the system will settle from the other iterated limit,
  lim_{N→∞} lim_{t→∞} µN(t).
◮ We need a little bit of robustness of the ODE for the interchange of limits to work.
[Figure: phase portrait; axes are the fraction in state 0 and the fraction in state 1.]
◮ Different parameters: c = (0.5, 0.3, 8.0).
◮ There are two stable equilibria.
[Figure: fraction in state 0 versus slot index.]
◮ If there is a unique globally asymptotically stable equilibrium ξf, then the interchange of limits is valid:
  lim_{N→∞} lim_{t→∞} µN(t) = ξf.
◮ If we encounter multiple stable limit sets, look at the probability of a large deviation away from each of them.
◮ Characterise the exponent V(q) in P( µN ≈ q in stationarity ) ≈ exp{ −N V(q) }.
◮ The locations {q : V(q) = 0} should “select” the correct limit set.
◮ V(q) is called a quasipotential (Freidlin-Wentzell).
◮ At each time t, if the current state is φ(t), the natural tendency is to drift along the McKean-Vlasov dynamics.
◮ To follow φ(t), however, the system needs to work against this natural tendency.
◮ L(φ(t), φ̇(t)) measures the instantaneous cost of this effort.
◮ Write φ̇(t) as the net flow produced by the per-pair transition rates.
◮ By decoupling, each node’s state is iid with law φ(t).
◮ The natural tendency for the Nφ(t)(i) nodes in state i is to make i → j transitions at rate λi,j(φ(t)).
◮ But to move along φ(t) they must have an instantaneous rate of i → j transitions consistent with φ̇(t).
◮ The Nφ(t)(i) Bernoulli(p = λi,j(t) dt) random variables must have an empirical mean matching the prescribed flow; the per-unit-time cost of such a deviation is a relative entropy.
◮ Sum over i and j and integrate over [0, T] to get the action S[0,T](φ) = ∫_0^T L(φ(t), φ̇(t)) dt.
◮ V (ξf ) = 0.
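In symbols, the minimisation described above is the standard Freidlin-Wentzell quasipotential (the notation L for the local cost follows the slides):

```latex
V(q) \;=\; \inf\Big\{ \int_0^T L\big(\varphi(t), \dot{\varphi}(t)\big)\,\mathrm{d}t
  \;:\; \varphi(0) = \xi_f,\ \varphi(T) = q,\ T > 0 \Big\},
\qquad V(\xi_f) = 0 .
```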
◮ V12 = cost of moving from ξf,1 to ξf,2.
◮ V21 = cost of the reverse move.
◮ If V12 > V21, then v1 = 0 and v2 = V12 − V21.
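The selection rule above can be written as a tiny helper. A sketch: with exactly two attractors, the Freidlin-Wentzell values reduce to differences of the two transition costs, the harder-to-leave equilibrium getting value zero.

```python
def quasipotential_values(V12, V21):
    # Two-attractor case: the more stable equilibrium (larger exit
    # cost) gets value 0; the other pays the cost difference.
    v1 = max(V21 - V12, 0.0)
    v2 = max(V12 - V21, 0.0)
    return v1, v2

print(quasipotential_values(3.0, 1.0))  # -> (0.0, 2.0)
```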
◮ Start from the global minimum ξf,1 and move to the attractor in whose basin q lies.
◮ Then move to q along the least-cost path.
◮ Finite time horizons again, but this time to study large deviations of the trajectory µN(·) from its fluid limit.
◮ Large deviations of the stationary measure when there is a globally asymptotically stable equilibrium.
◮ Analysis of the Markov chain of equilibrium neighbourhoods at exponentially long time scales.
◮ Today’s talk is a synthesis of ideas from several lines of work on mean-field limits and large deviations.
◮ Thanks to many people for discussions and collaborations.