  1. HIERATIC – Hierarchical Analysis of Complex Dynamical Systems WP6 – Formal theoretical results concerning network dynamics John Haslegrave, Chris Cannings The University of Sheffield December 11, 2014

  2. Overview 1 Tournament process 2 Cyclic relationships in the Moran process (rock-paper-scissors; symmetric generalisations; coarse-graining random generalisations) 3 Preferential attachment with choice

  3. Tournament process We consider a random process on a connected graph G with n vertices. Each vertex is assigned a strategy from {1, . . . , m} uniformly at random. At each time step an edge is chosen; both endpoints adopt the lower of the two strategies with probability p and the higher with probability 1 − p. Since G is connected, eventually we reach a state where only one strategy remains. Write S for the strategy that is left; then P(S ≤ l) can be computed by coarse-graining the strategies into those at most l and those exceeding l.

  4. Probability of survival Write a_0 for the number of vertices initially in {1, . . . , l}, and let a_r be the number after the r-th time a significant edge is chosen. This is a random walk where a_r = a_{r−1} + 1 with probability p and a_r = a_{r−1} − 1 with probability 1 − p, independently of which edges are chosen and independently of G. We can find the probability that this random walk hits n before 0, and take a weighted average over a_0 (which is Binomial) to get

      P(S = l) = [(l − 2lp + mp)^n − (l − 1 − 2lp + 2p + mp)^n] / [(m − mp)^n − (mp)^n].
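The formula above can be checked against direct simulation. Below is a minimal Python sketch (the function names, the choice of the complete graph K_5, and the parameters m = 3, p = 0.3 are mine, not from the slides):

```python
import random

def tournament_process(n, m, p, edges, rng):
    """Run the tournament process until a single strategy remains; return it."""
    strat = [rng.randrange(1, m + 1) for _ in range(n)]
    while len(set(strat)) > 1:
        u, v = rng.choice(edges)
        if strat[u] != strat[v]:
            if rng.random() < p:
                winner = min(strat[u], strat[v])  # lower strategy wins w.p. p
            else:
                winner = max(strat[u], strat[v])  # higher strategy wins w.p. 1 - p
            strat[u] = strat[v] = winner
    return strat[0]

def survival_probability(l, n, m, p):
    """Closed-form P(S = l) from the slide (valid for p != 1/2)."""
    num = (l - 2 * l * p + m * p) ** n - (l - 1 - 2 * l * p + 2 * p + m * p) ** n
    den = (m - m * p) ** n - (m * p) ** n
    return num / den

if __name__ == "__main__":
    n, m, p = 5, 3, 0.3
    edges = [(i, j) for i in range(n) for j in range(i + 1, n)]  # complete graph K_5
    rng = random.Random(0)
    trials = 5000
    counts = [0] * (m + 1)
    for _ in range(trials):
        counts[tournament_process(n, m, p, edges, rng)] += 1
    for l in range(1, m + 1):
        print(l, counts[l] / trials, round(survival_probability(l, n, m, p), 4))
```

Note that the distribution S does not depend on which connected graph is used, so the complete graph here is just a convenient choice.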

  5. Time to reach consensus The distribution of final results does not depend on the graph, but the time taken to reach consensus does. Conjecture: for any n, m and p, the complete graph has the smallest expected time to consensus. Theorem: for any n and p, if m = 2 the complete graph has the smallest expected time to consensus of any regular graph.

  6. Time to reach consensus What about the worst case? Not the path, at least for p close to 0 (or 1). Define a sundew to be a graph formed from K_{n−m} by adding m pendant edges, distributed as evenly as possible. We can show that the expected time for a sundew is e(G)(log m ± 1). Setting m = cn gives an expected time which is Θ(n² log n). The worst-case expected time for the path when p = 0 is (n − 1)², and in fact we can show that the overall expected time for the path is O(n (log n)²).

  7. Time to reach consensus PRISM simulations for a range of plausible graphs suggest that a sundew is the worst case for p close to 0, and a tadpole (formed by adding a pendant path to K_{n−m}) is the worst case for p close to 1/2.

  8. Monotonicity Can we prove that the expected time is always monotonic on [0, 1/2]? This is not obvious even in simple cases. For a simple random walk on {0, . . . , n}, moving left or right with probability p and 1 − p, we can show that the time to reach an endpoint is monotonic on [0, 1/2] provided the starting distribution is symmetric. The same applies if there are equal delays at all points – if we move left, move right, or remain with probabilities rp, r(1 − p), 1 − r. What if the value of r depends on where we are, but is symmetric on {0, . . . , n}? This would imply monotonicity for a complete graph.
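For the plain walk on {0, . . . , n} the claim can at least be checked numerically. A small sketch, using the standard gambler's-ruin hitting-time formula (valid for p ≠ 1/2; this closed form is textbook material rather than from the slides) and a uniform, hence symmetric, starting distribution:

```python
def expected_absorption_time(i, n, p):
    """Expected steps for a walk on {0, ..., n} (right w.p. p, left w.p. 1 - p,
    p != 1/2) started at i to reach 0 or n: standard gambler's-ruin formula."""
    q = 1.0 - p
    r = q / p
    return i / (q - p) - (n / (q - p)) * (1.0 - r ** i) / (1.0 - r ** n)

def mean_time_symmetric_start(n, p):
    """Average absorption time under the uniform (symmetric) start on {0, ..., n}."""
    return sum(expected_absorption_time(i, n, p) for i in range(n + 1)) / (n + 1)

if __name__ == "__main__":
    times = [mean_time_symmetric_start(10, p) for p in (0.05, 0.15, 0.25, 0.35, 0.45)]
    print(times)  # increases as p approaches 1/2, matching the monotonicity claim
```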

  9. Rock-paper-scissors We look at a similar process with three strategies in cyclic order: rock always beats scissors, scissors always beats paper, paper always beats rock. For general graphs the starting position will affect the outcome, so we concentrate on the well-mixed situation (the complete graph).

  10. Rock-paper-scissors From simulations, it appears that all strategies are roughly equally likely to eventually dominate unless the starting state is very one-sided. The eventual winner is determined by the first strategy eliminated; if Rock is initially dominant it is likely Scissors will be eliminated, allowing Paper to win.
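This effect is easy to reproduce in a quick simulation. A rough sketch, where a step means picking two individuals uniformly at random and letting the loser adopt the winner's strategy (this modelling of a step, and the population sizes, are my assumptions):

```python
import random

BEATS = {("R", "S"), ("S", "P"), ("P", "R")}  # Rock > Scissors > Paper > Rock

def rps_winner(pop, rng):
    """Run until one strategy remains; the loser of each encounter
    adopts the winner's strategy. Returns the surviving strategy."""
    pop = list(pop)
    while len(set(pop)) > 1:
        a, b = rng.sample(range(len(pop)), 2)
        if pop[a] != pop[b]:
            if (pop[a], pop[b]) in BEATS:
                pop[b] = pop[a]
            else:
                pop[a] = pop[b]
    return pop[0]

if __name__ == "__main__":
    rng = random.Random(0)
    start = ["R"] * 24 + ["S"] * 3 + ["P"] * 3  # Rock heavily dominant
    wins = sum(rps_winner(start, rng) == "P" for _ in range(200))
    print(wins / 200)
```

Starting with Rock heavily dominant, Scissors tends to be eliminated first, after which Paper converts Rock unopposed, so Paper wins in the large majority of runs.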

  11. Rock-paper-scissors If almost all the population are of one type, the numbers of the other two types change like an urn process which starts with w white balls and b black. At each step a ball is drawn; if white, it is removed; if black, two black balls are returned. Eventually (at time T_{w,b}) all white balls are eliminated. E(T_{w,b}) = ∞, but we can bound the probability that the process has ended by a given time. Lemma: for any w, b and t with t ≥ w,

      wb / (t + wb) ≤ P(T_{w,b} > t) ≤ wb / (t + b + 1 − w).
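These bounds can be checked empirically. Since only the event {T_{w,b} > t} matters, the urn never needs to be run for more than t draws, which keeps the infinite mean of T_{w,b} out of the simulation. A sketch (the parameter choices are arbitrary):

```python
import random

def survives_past(w, b, t, rng):
    """Run the urn for at most t draws; True iff some white ball remains (T > t)."""
    for _ in range(t):
        if w == 0:
            return False
        if rng.random() < w / (w + b):
            w -= 1       # white ball drawn and removed
        else:
            b += 1       # black ball drawn; returned together with a second black
    return w > 0

if __name__ == "__main__":
    w, b, t, trials = 2, 3, 20, 100000
    rng = random.Random(0)
    est = sum(survives_past(w, b, t, rng) for _ in range(trials)) / trials
    lower = w * b / (t + w * b)
    upper = w * b / (t + b + 1 - w)
    print(lower, est, upper)  # est should sit between the bounds, up to sampling error
```

For w = 1 both bounds collapse to b/(t + b), which is the exact value of P(T_{1,b} > t).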

  12. Rock-paper-scissors Write r for the number of Rock players, and similarly p and s. It follows that if ps = o(n^{2/3}), Scissors will be eliminated whp before any new Scissors arise, allowing Paper to win. In fact rps is a sub-martingale, and this implies that if ps = o(n) then Rock will not be eliminated first, and so either Paper or Rock will win (whp). We would conjecture that if ps = o(n) then Paper wins whp, but that if min(rp, ps, sr) = ω(n) then the winning probabilities approach (1/3, 1/3, 1/3).

  13. Symmetric generalisations What if we have more than three strategies? The first elimination no longer determines the result, but once a strategy is eliminated two others can be lumped together. In rock-paper-scissors-lizard-Spock, if Rock is eliminated then we reduce to the three-strategy model on Scissors, Spock, {Paper, Lizard}.

  14. Symmetric generalisations We can take a similar model for larger numbers of strategies. For any odd n, we take strategies 1, . . . , n where strategy i beats strategy i + j (mod n) for each j = 1, . . . , (n − 1)/2. There are no coarse-grainings of this, but if we eliminate one strategy (e.g. 6), two of the others may be lumped together.

  15. Iterated lumping Suppose we take this symmetric system for some odd n , and we wish to simulate the Moran process to estimate how often strategy 1 (say) is the last remaining. We could simply run the process until either strategy 1 is eliminated or all others are. A better algorithm is to stop when one strategy is eliminated and lump together the two opposite strategies (provided neither of them is 1), then continue. Every time we do this lumping, we get a smaller symmetric system with the same properties. This speeds the process up considerably.
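A minimal sketch of the naive algorithm for the symmetric system (strategies are relabelled 0, . . . , n − 1 here for convenience, and the lumping optimisation described above is deliberately omitted):

```python
import random

def beats(i, j, n):
    """Symmetric system on odd n strategies 0, ..., n-1: strategy i beats
    i+1, ..., i+(n-1)//2 (mod n)."""
    return 0 < (j - i) % n <= (n - 1) // 2

def naive_winner(pop, n, rng):
    """Naive algorithm: run the process until a single strategy remains,
    with no lumping of strategies along the way."""
    pop = list(pop)
    while len(set(pop)) > 1:
        a, b = rng.sample(range(len(pop)), 2)
        if pop[a] != pop[b]:
            if beats(pop[a], pop[b], n):
                pop[b] = pop[a]
            else:
                pop[a] = pop[b]
    return pop[0]

if __name__ == "__main__":
    n, k = 5, 4
    start = [0] * (2 * k) + [s for s in range(1, n) for _ in range(k)]
    rng = random.Random(0)
    freq = sum(naive_winner(start, n, rng) == 0 for _ in range(200)) / 200
    print(freq)  # how often the doubled strategy survives
```

The lumped version would instead stop whenever a strategy is eliminated, merge the two strategies with identical relations to the survivors, and restart on the smaller symmetric system.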

  16. Iterated lumping We did this starting from 2k individuals playing strategy 1 and k playing each other strategy. The table below gives the time taken (mins) for 1000 trials of the naive and the coarse-graining algorithms, and the percentage saving.

                  k = 5                           k = 10
      n = 25      naive 6.9, lumped 2.1 (70%)     naive 24.5, lumped 8.1 (67%)
      n = 49      naive 63.6, lumped 14.0 (78%)   naive 229.6, lumped 57.3 (75%)
      n = 73      naive 252, lumped 45 (82%)      naive 785–912, lumped 192 (75–79%)

  17. Random orientations In more generality, we could use any orientation of the complete graph to define the relationships between the strategies (x beats y if the edge is oriented from x to y). If we orient the edges completely at random, how likely is a coarse-graining of the strategies to exist, and how do we find one if it does? A coarse-graining exists if and only if there is some set S with 1 < |S| < n such that for any x, y ∈ S and z ∉ S, the edges xz and yz are oriented the same way.

  18. Random orientations An algorithm for finding a coarse-graining is as follows. If all edges from strategy 1 are oriented the same way, then all other strategies can be lumped together; if not, split the other strategies into two sets according to whether they beat 1. This gives a list of two sets; any lump which doesn't include 1 must be contained in some set in the list. Now iteratively take a set A from the list, give each strategy in A a signature based on its orientations to the strategies outside A, and split A by signature. Repeat this operation until either no further changes can be made (in which case we have a coarse-graining) or every set in the list is a singleton (in which case no coarse-graining avoiding 1 exists). We can then repeat the whole process starting from strategy 2, 3, etc., and will eventually find all coarse-grainings.
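For small n the characterisation on the previous slide can also be tested directly by brute force over all candidate sets S, rather than via the refinement algorithm above. A sketch (the function names are mine):

```python
import random
from itertools import combinations

def random_orientation(n, rng):
    """orient[x][y] is True iff the edge xy is oriented from x to y (x beats y)."""
    orient = [[False] * n for _ in range(n)]
    for x in range(n):
        for y in range(x + 1, n):
            if rng.random() < 0.5:
                orient[x][y] = True
            else:
                orient[y][x] = True
    return orient

def has_coarse_graining(orient):
    """True iff some S with 1 < |S| < n is treated homogeneously by every
    strategy z outside S (all edges between z and S oriented the same way)."""
    n = len(orient)
    for size in range(2, n):
        for S in combinations(range(n), size):
            if all(len({orient[z][x] for x in S}) == 1
                   for z in range(n) if z not in S):
                return True
    return False

if __name__ == "__main__":
    rng = random.Random(0)
    for n in (3, 4, 5):
        hits = sum(has_coarse_graining(random_orientation(n, rng)) for _ in range(2000))
        print(n, hits / 2000)
```

For n = 3 this gives roughly 3/4 (a random triangle is transitive with probability 3/4), and for n = 4 every orientation has a coarse-graining, matching the simulation table that follows.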

  19. Random orientations Simulation results for how likely a coarse-graining is to exist (out of 10000 trials):

      n:         3      4     5     6     7     8     9    10
      count:  7489  10000  7460  6897  5345  4059  2753  1807

      n:        11     12    13    14    15    16    17    18
      count:  1176    682   397   250   154    90    38    22

  Theorem 1: Write p_n for the probability that a random orientation of K_n has a coarse-graining. Then p_n = (n² + n + o(1)) / 2^{n−1}.

  20. Random orientations How likely is it that there is a coarse-graining either initially, or when some strategy is eliminated?

      n:         3      4      5     6     7     8     9    10
      count:  7489  10000  10000  9911  9901  9667  9120  8155

      n:        11     12     13    14    15    16    17    18
      count:  6787   5417   3832  2655  1846  1142   700   394

  Theorem 2: Write q_n for the probability that a coarse-graining exists either initially or after eliminating some strategy. Then q_n = (n³ + n + o(1)) / 2^{n−1}.

  21. Balanced orientations If we restrict the orientation to be balanced, that is to say every vertex has exactly half its edges going out, the chance of a coarse-graining existing initially is very low, but there is still a significant chance that a coarse-graining exists after one strategy is removed.

                      initially   one removed    neither
      7 vertices              0             2          1
      9 vertices              1            11          3
      11 vertices             3           729        491
      13 vertices           428        532000     962869
