SLIDE 1

HIERATIC – Hierarchical Analysis of Complex Dynamical Systems

WP6 – Formal theoretical results concerning network dynamics
John Haslegrave, Chris Cannings
The University of Sheffield
December 11, 2014

SLIDE 2

Overview

1 Tournament process
2 Cyclic relationships in Moran process
  – Rock-paper-scissors
  – Symmetric generalisations
  – Coarse-graining random generalisations
3 Preferential attachment with choice

SLIDE 3

Tournament process

We consider a random process on a connected graph G with n vertices. Each vertex has a strategy taken from {1, . . . , m} at random. At each time step an edge is chosen; both vertices take the lower of the two strategies with probability p and the higher with probability 1 − p.

Since G is connected, eventually we will reach a state where only one strategy remains. Write S for the strategy that is left; then P(S ≤ l) can be computed by coarse-graining the strategies into those at most l and those exceeding l.
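The process above can be simulated directly. A minimal Python sketch (function and variable names are ours, not from the slides):

```python
import random

def tournament_process(edges, strategies, p, rng):
    """Run the tournament process until one strategy remains.

    edges: list of edges (u, v) of a connected graph.
    strategies: initial strategy of each vertex.
    p: probability that both endpoints adopt the LOWER strategy.
    Returns the surviving strategy S.
    """
    s = list(strategies)
    while len(set(s)) > 1:
        u, v = rng.choice(edges)
        winner = min(s[u], s[v]) if rng.random() < p else max(s[u], s[v])
        s[u] = s[v] = winner
    return s[0]

# Example: complete graph on n vertices, m strategies assigned at random.
n, m, p = 8, 3, 0.3
rng = random.Random(1)
edges = [(i, j) for i in range(n) for j in range(i + 1, n)]
init = [rng.randrange(1, m + 1) for _ in range(n)]
S = tournament_process(edges, init, p, rng)
```

At p = 1 the lower strategy wins every interaction, so the minimum initial strategy must survive; symmetrically p = 0 leaves the maximum.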

SLIDE 4

Probability of survival

Write a0 for the number of vertices initially in {1, . . . , l}, and let ar be the number after the rth time a significant edge is chosen. This is a random walk where ar = ar−1 + 1 with probability p and ar = ar−1 − 1 with probability 1 − p, independent of which edges are chosen and independent of G. We can find the probability that this random walk hits 0 before n, and take a weighted average over a0 (which is binomial) to get

P(S = l) = [(l − 2lp + mp)^n − (l − 1 − 2lp + 2p + mp)^n] / [(m − mp)^n − (mp)^n].
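The closed form is easy to check numerically: the numerator telescopes over l, so the probabilities must sum to 1. A short sketch (the function name is ours; valid for p ≠ 1/2):

```python
def p_survive(l, n, m, p):
    """P(S = l) for the tournament process on any connected graph,
    from the gambler's-ruin calculation (valid for p != 1/2)."""
    num = (l - 2*l*p + m*p)**n - (l - 1 - 2*l*p + 2*p + m*p)**n
    den = (m - m*p)**n - (m*p)**n
    return num / den

# The probabilities over l = 1..m should sum to 1:
n, m, p = 10, 4, 0.3
total = sum(p_survive(l, n, m, p) for l in range(1, m + 1))
```

With p < 1/2 the higher strategy wins each interaction more often, so P(S = m) should dominate P(S = 1), which the formula reproduces.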

SLIDE 5

Time to reach consensus

The distribution of final results does not depend on the graph, but the time taken to reach consensus does.

Conjecture. For any n, m and p, the complete graph has the smallest expected time to consensus.

Theorem. For any n and p, if m = 2 the complete graph has the smallest expected time to consensus of any regular graph.

SLIDE 6

Time to reach consensus

What about the worst case? Not the path, at least for p close to 0 (or 1). Define a sundew to be a graph formed from Kn−m by adding m pendant edges, distributed as evenly as possible. We can show that the expected time for a sundew is e(G)(log m ± 1). Setting m = cn gives an expected time which is Θ(n^2 log n). The worst-case expected time for the path when p = 0 is (n − 1)^2, and in fact we can show that the overall expected time for the path is O(n(log n)^2).

SLIDE 7

Time to reach consensus

PRISM simulations for a range of plausible graphs suggest that a sundew is the worst case for p close to 0, and a tadpole (formed by adding a pendant path to Kn−m) is the worst case for p close to 1/2.

SLIDE 8

Monotonicity

Can we prove that the expected time is always monotonic on [0, 1/2]? Not obvious even in simple cases.

For a simple random walk on {0, . . . , n}, moving left or right with probability p and 1 − p, we can show that the time to reach an endpoint is monotonic on [0, 1/2] provided the starting distribution is symmetric. The same applies if there are equal delays at all points – if we move left, move right, or remain with probabilities rp, r(1 − p), 1 − r.

What if the value of r depends on where we are, but is symmetric on {0, . . . , n}? This would imply monotonicity for a complete graph.
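The monotonicity claim for the simple walk can be illustrated by solving the tridiagonal hitting-time system exactly. A sketch (function names are ours; the Thomas algorithm here is a standard solver, not part of the slides):

```python
def absorption_times(n, p):
    """Expected time to hit 0 or n for the walk on {0, ..., n} that
    steps +1 with probability p and -1 with probability 1 - p.
    Solves T(a) = 1 + p*T(a+1) + (1-p)*T(a-1), T(0) = T(n) = 0,
    by the Thomas algorithm for tridiagonal systems."""
    m = n - 1                      # interior states 1..n-1
    b = [1.0] * m                  # diagonal entries
    d = [1.0] * m                  # right-hand side
    for i in range(1, m):          # forward elimination
        w = -(1 - p) / b[i - 1]
        b[i] -= w * (-p)
        d[i] -= w * d[i - 1]
    x = [0.0] * m
    x[-1] = d[-1] / b[-1]
    for i in range(m - 2, -1, -1): # back substitution
        x[i] = (d[i] + p * x[i + 1]) / b[i]
    return x                       # x[a-1] = expected time from state a

def mean_time(n, p):
    """Average over the (symmetric) uniform start on {1, ..., n-1}."""
    x = absorption_times(n, p)
    return sum(x) / len(x)
```

At p = 1/2 the expected time from a is a(n − a); for a symmetric start the mean time should increase as p moves from 0 towards 1/2.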

SLIDE 9

Rock-paper-scissors

We look at a similar process with three strategies in cyclic order: rock always beats scissors, scissors always beats paper, paper always beats rock. For general graphs, the starting position will affect the outcome. We’ll concentrate on the well-mixed situation (the complete graph).

SLIDE 10

Rock-paper-scissors

From simulations, it appears that all strategies are roughly equally likely to eventually dominate unless the starting state is very one-sided.

The eventual winner is determined by the first strategy eliminated; if Rock is initially dominant it is likely Scissors will be eliminated, allowing Paper to win.

SLIDE 11

Rock-paper-scissors

If almost all the population are of one type, the numbers of the other types change like an urn process which starts with w white balls and b black. At each step a ball is drawn; if white, it is removed; if black, two black balls are returned. Eventually (at time Tw,b) all white balls are eliminated. E(Tw,b) = ∞, but we can bound the probability that the process has ended by a given time.

Lemma. For any w, b and t with t ≥ w,

wb/(t + wb) ≤ P(Tw,b > t) ≤ wb/(t + b + 1 − w).
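The survival probability P(Tw,b > t) can be computed exactly by dynamic programming over the urn state, which lets us check the bounds wb/(t + wb) ≤ P(Tw,b > t) ≤ wb/(t + b + 1 − w) on small cases. A sketch in exact rational arithmetic (function name is ours):

```python
from fractions import Fraction
from functools import lru_cache

@lru_cache(maxsize=None)
def p_alive(w, b, t):
    """Exact P(T_{w,b} > t): probability some white ball survives t
    draws of the urn process (white drawn -> removed; black drawn ->
    two black balls returned)."""
    if w == 0:
        return Fraction(0)
    if t == 0:
        return Fraction(1)
    total = w + b
    return (Fraction(w, total) * p_alive(w - 1, b, t - 1)
            + Fraction(b, total) * p_alive(w, b + 1, t - 1))

# Check the lemma's bounds on one state with t >= w:
w, b, t = 2, 3, 6
pr = p_alive(w, b, t)
```

For w = 1 the two bounds coincide at b/(t + b), and indeed P(T1,b > t) = b/(t + b) exactly, since the surviving probabilities telescope.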

SLIDE 12

Rock-paper-scissors

Write r for the number of Rock players, etc. It follows that if ps = o(n^{2/3}), Scissors will be eliminated whp before any new Scissors arise, allowing Paper to win. In fact rps is a sub-martingale, and this implies that if ps = o(n) Rock will not be eliminated first, and so either Paper or Rock will win (whp). We would conjecture that if ps = o(n) Paper wins whp, but that if min(rp, ps, sr) = ω(n) then the winning probabilities approach (1/3, 1/3, 1/3).

SLIDE 13

Symmetric generalisations

What if we have more than three strategies? The first elimination no longer determines the result, but once a strategy is eliminated two others can be lumped together. If Rock is eliminated, then we reduce to the three-strategy model on Scissors, Spock, {Paper, Lizard}.

SLIDE 14

Symmetric generalisations

We can take a similar model for larger numbers of strategies. For any odd n, we take strategies 1, . . . , n where strategy i beats strategy i + j (mod n) for each j = 1, . . . , (n − 1)/2.

There are no coarse-grainings of this, but if we eliminate one strategy (e.g. 6), two of the others may be lumped together.

SLIDE 15

Iterated lumping

Suppose we take this symmetric system for some odd n, and we wish to simulate the Moran process to estimate how often strategy 1 (say) is the last remaining. We could simply run the process until either strategy 1 is eliminated or all others are. A better algorithm is to stop when one strategy is eliminated and lump together the two opposite strategies (provided neither of them is 1), then continue. Every time we do this lumping, we get a smaller symmetric system with the same properties. This speeds the process up considerably.

SLIDE 16

Iterated lumping

We did this starting from 2k individuals playing strategy 1 and k playing each other strategy. The table below gives the time taken (mins) for 1000 trials of the naive and the coarse-graining algorithms, with the percentage saving.

           k = 5      k = 10
n = 25   naive:   6.9    24.5
         lumped:  2.1     8.1
         saving:  70%     67%
n = 49   naive:  63.6   229.6
         lumped: 14.0    57.3
         saving:  78%     75%
n = 73   naive:  252    785–912
         lumped:  45     192
         saving:  82%    75–79%

SLIDE 17

Random orientations

In more generality, we could use any orientation of the complete graph to define relationships between the strategies (x beats y if the edge is oriented from x to y). If we orient the edges completely at random, how likely is a coarse-graining of the strategies to exist? How do we find one if it does?

A coarse-graining will exist if and only if there is some set S with 1 < |S| < n, such that for any x, y ∈ S and z ∉ S, xz and yz are oriented the same way.
SLIDE 18

Random orientations

An algorithm for finding a coarse-graining is as follows. If all edges from 1 are oriented the same way, then all other strategies can be lumped together; if not, split the other strategies into two sets according to whether they beat 1. This gives us a list of two sets; any lump which doesn't include 1 must be contained in some set in the list.

Now iteratively take a set A from the list, give each strategy in A a signature based on the orientations to strategies outside A, then split A by signature. Repeat this operation until either no further changes can be made (in which case we have a coarse-graining) or every set in the list is a singleton (in which case no coarse-graining without 1 exists). We can then repeat this process starting by removing 2, 3, etc., and will eventually find all coarse-grainings.
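A naive brute-force check over all candidate sets S serves as a baseline against which the refinement algorithm can be validated. A minimal sketch (function names are ours; feasible only for small n):

```python
from itertools import combinations

def has_coarse_graining(n, beats):
    """Brute-force check: does a set S with 1 < |S| < n exist such that
    every strategy z outside S has all its edges to S oriented the same
    way?  beats(x, y) is True iff x beats y."""
    for size in range(2, n):
        for S in combinations(range(n), size):
            outside = [z for z in range(n) if z not in S]
            if all(len({beats(z, x) for x in S}) == 1 for z in outside):
                return True
    return False

def cyclic_beats(n):
    """The symmetric cyclic tournament: i beats i+1, ..., i+(n-1)/2 (mod n)."""
    return lambda x, y: x != y and (y - x) % n <= (n - 1) // 2

# The cyclic tournament on 5 strategies has no coarse-graining:
has_cg_cyclic5 = has_coarse_graining(5, cyclic_beats(5))
```

Enumerating all 8 orientations of K3, exactly the 6 transitive ones have a coarse-graining (matching the simulated frequency 7489/10000 ≈ 3/4), and all 64 orientations of K4 have one (matching 10000/10000).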

SLIDE 19

Random orientations

Simulation results for how likely a coarse-graining is to exist (out of 10000 trials):

n        3      4      5      6      7      8      9     10
      7489  10000   7460   6897   5345   4059   2753   1807

n       11     12     13     14     15     16     17     18
      1176    682    397    250    154     90     38     22

Theorem 1. Write pn for the probability that a random orientation of Kn has a coarse-graining. Then

pn = (n^2 + n + o(1)) / 2^{n−1}.

SLIDE 20

Random orientations

How likely is it that there is a coarse-graining either initially, or when some strategy is eliminated? (Out of 10000 trials.)

n        3      4      5      6      7      8      9     10
      7489  10000  10000   9911   9901   9667   9120   8155

n       11     12     13     14     15     16     17     18
      6787   5417   3832   2655   1846   1142    700    394

Theorem 2. Write qn for the probability that a coarse-graining exists either initially or after eliminating some strategy. Then

qn = (n^3 + n + o(1)) / 2^{n−1}.

SLIDE 21

Balanced orientations

If we restrict our orientation to be balanced, that is to say every vertex has exactly half its edges going out, the chance of a coarse-graining existing initially is very low, but there is still a significant chance that a coarse-graining exists after one strategy is removed.

               initially   one removed   neither
7 vertices         2             1            0
9 vertices         1            11            3
11 vertices        3           729          491
13 vertices      428        532000       962869

SLIDE 22

Preferential attachment with choice

(joint with Jonathan Jordan, Sheffield)

Preferential attachment is a class of evolving graph processes which aims to model behaviour found in real-world growing networks, e.g. the internet, Facebook. The simplest model was introduced by Barabási and Albert (1999). Grow a tree, starting from two connected vertices. At each step, a new vertex is added, and it forms a single edge to an existing vertex. The vertex to connect to is chosen at random, with probability proportional to its existing degree.

There is a natural coarse-graining of this process, by reducing the graph to its degree sequence. The proportion of vertices of degree k tends to a specific limit πk with high probability. Bollobás, Riordan, Spencer and Tusnády proved that πk = Θ(k^{−3}).
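The Barabási–Albert tree is short to simulate: keeping a multiset where each vertex appears once per unit of degree makes degree-proportional sampling a uniform draw. A sketch (function name is ours):

```python
import random

def ba_tree(n, rng):
    """Grow a Barabási–Albert preferential attachment tree with n >= 2
    vertices, starting from two connected vertices.  Each new vertex
    attaches to one existing vertex chosen with probability
    proportional to its current degree."""
    deg = [1, 1]                 # start: vertices 0 and 1 joined by an edge
    targets = [0, 1]             # multiset: vertex v appears deg[v] times
    edges = [(0, 1)]
    for v in range(2, n):
        u = rng.choice(targets)  # degree-proportional choice
        edges.append((u, v))
        deg.append(1)
        deg[u] += 1
        targets.extend([u, v])
    return deg, edges

deg, edges = ba_tree(1000, random.Random(0))
```

In this model the limit proportion of leaves is π1 = 2/3, so a run of 1000 vertices should have roughly two thirds of its vertices at degree 1.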

SLIDE 23

Preferential attachment with choice

Malyshkin and Paquette recently introduced a new class of preferential attachment tree models. New nodes first make two independent choices (again proportional to degree). In the max-choice model, the new vertex links to the higher-degree of its two choices; in the min-choice model, to the lower-degree.

(Figure: example trees for the max-choice, standard preferential attachment, and min-choice models.)
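The max-choice and min-choice variants need only a small change to the attachment step. A sketch (function name and `rank` parameter are ours):

```python
import random

def choice_tree(n, rng, rank):
    """Preferential attachment tree with choice: each new vertex draws
    two existing vertices independently (probability proportional to
    degree) and attaches to the higher-degree draw (rank=1, max choice)
    or the lower-degree draw (rank=2, min choice)."""
    deg = [1, 1]
    targets = [0, 1]              # vertex v appears deg[v] times
    for v in range(2, n):
        a, b = rng.choice(targets), rng.choice(targets)
        if rank == 1:
            u = max(a, b, key=lambda x: deg[x])
        else:
            u = min(a, b, key=lambda x: deg[x])
        deg.append(1)
        deg[u] += 1
        targets.extend([u, v])
    return deg

deg_max_choice = choice_tree(2000, random.Random(1), rank=1)
deg_min_choice = choice_tree(2000, random.Random(1), rank=2)
```

Even at this size the contrast is stark: the max-choice tree grows a hub of large degree, while min-choice degrees stay uniformly small.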

SLIDE 24

Preferential attachment with choice

To generalise, we might make r ≥ 2 choices and then attach to the choice of rank s. Rather than the proportion of vertices of degree k, it is more convenient to consider the probability that a single preferential choice gives a vertex of degree at most k. Again, this approaches a fixed limit pk (which depends on r and s). Malyshkin and Paquette showed that if r > 2 and s = 1, there is a vertex whose degree grows linearly, so pk is bounded away from 1. Conversely, they showed that if r = s = 2, 1 − pk decays doubly-exponentially.

Conjecture (Krapivsky and Redner). Doubly-exponential decay happens whenever s ≥ 2, and is of the form C^{s^k}.

SLIDE 25

Preferential attachment with choice

Theorem 1. If r ≥ s ≥ 2, pk → p∗, where p∗ is the smallest positive root of

P(Bin(r, p) > r − s) − 2p + 1 = 0.

For s = 2, p∗ = 1 only if r < 7. In general, for fixed s there exists r(s) such that p∗ = 1 iff r < r(s).
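The root equation is straightforward to explore numerically; p = 1 is always a root, and the question is whether a smaller one appears. A grid-scan sketch (function names are ours, and the scan only approximates the root):

```python
from math import comb

def g(p, r, s):
    """P(Bin(r, p) > r - s) - 2p + 1, whose smallest positive root is p*."""
    tail = sum(comb(r, k) * p**k * (1 - p)**(r - k)
               for k in range(r - s + 1, r + 1))
    return tail - 2 * p + 1

def p_star(r, s, grid=10000):
    """Approximate smallest positive root of g on (0, 1].
    g(0+) = 1 > 0, so we return the first grid point where g <= 0;
    p = 1 is always a root, so the scan always terminates there."""
    for i in range(1, grid + 1):
        p = i / grid
        if g(p, r, s) <= 0:
            return p
    return 1.0
```

For s = 2 this reproduces the threshold above: the scan finds no root below 1 for r = 6, but an interior root near 0.55 for r = 7.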

SLIDE 26

Preferential attachment with choice

Theorem 2. For every s ≥ 2, r(s) < ∞ and r(s) ≥ 2s. Also, for any c1, c2 > 0, if s is sufficiently large then

2s + c1·√s < r(s) < 2s + c2·s.

We also show that doubly-exponential decay occurs whenever possible.

Theorem 3. If p∗ = 1 then −log(1 − pk) = Θ(s^k).

SLIDE 27

Preferential attachment with choice

So in all cases when s ≥ 2, either 1 − pk is bounded away from 0 or it decays doubly-exponentially. If s = 1 and r > 2 we also have the former behaviour. There is one exceptional case.

Theorem 4. If r = 2 and s = 1, then

pk = 1 − (2 + o(1))/log(k + 1).