
Bounding the Convergence of Mixing and Consensus Algorithms
Simon Apers (1), Alain Sarlette (1,2) & Francesco Ticozzi (3,4)
(1) Ghent University, (2) INRIA Paris, (3) University of Padova, (4) Dartmouth College
arXiv:1711.06024, 1705.08253, 1712.01609



dynamics on graphs:
● diffusion
● rumour spreading
● weight balancing
● quantum walks
● ...
under appropriate conditions the dynamics will "mix" (converge, equilibrate); the time scale is the "mixing time"

example: random walk on the dumbbell graph
mixing time: for two n-cliques joined by a single edge, of order n² (the walk must squeeze through the bottleneck edge)

example: random walk on the dumbbell graph
conductance bound: the mixing time of a Markov chain is at least of order 1/Φ, where the conductance Φ is the worst-case ratio of the probability flow out of a set to the set's stationary mass; for the dumbbell, Φ ~ 1/n², matching the n² mixing time
proof idea: starting inside a set with small conductance, at most a Φ-fraction of the probability mass escapes per step, so order 1/Φ steps are needed to equilibrate
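The conductance bound can be checked numerically. Here is a small sketch (my own illustration, not from the slides): it builds a dumbbell of two 8-cliques joined by one bridge edge, computes the conductance of the bridge cut, and measures the walk's total-variation mixing time.

```python
import numpy as np

def dumbbell(m):
    """Adjacency matrix of two m-cliques joined by a single edge."""
    n = 2 * m
    A = np.zeros((n, n))
    A[:m, :m] = 1 - np.eye(m)        # first clique
    A[m:, m:] = 1 - np.eye(m)        # second clique
    A[m - 1, m] = A[m, m - 1] = 1    # bridge edge
    return A

m = 8
A = dumbbell(m)
deg = A.sum(axis=1)
P = A / deg[:, None]                   # simple random walk
pi = deg / deg.sum()                   # stationary distribution

# conductance of the bridge cut S = first clique
S = np.arange(m)
Q = (pi[S, None] * P[S][:, m:]).sum()  # probability flow out of S per step
phi = Q / min(pi[S].sum(), 1 - pi[S].sum())

# mixing time: first t with total-variation distance below 1/4
p = np.zeros(2 * m); p[0] = 1.0        # start deep inside one clique
t = 0
while 0.5 * np.abs(p - pi).sum() >= 0.25 and t < 10_000:
    p = p @ P
    t += 1

print(f"conductance ~ {phi:.4f}, 1/phi ~ {1/phi:.0f}, mixing time = {t}")
```

The mixing time comes out on the order of 1/Φ, while the diameter of the graph stays at 3.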

example: random walk on the dumbbell graph
however, diameter = 3; can we do any better?
yes: improve the central hub

example: random walk on the dumbbell graph
however, diameter = 3; can we do any better, not using simple Markov chains?
what if we allow time dependence? memory? quantum dynamics?
e.g. non-backtracking random walks, lifted Markov chains, simulated annealing, polynomial filters, quantum walks, ...
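To make the speed-up from memory concrete, here is a sketch (my own illustration, in the style of the Diaconis-Holmes-Neal lifted walk on the cycle): a walker that remembers a direction bit and reverses only with probability 1/n spreads ballistically and mixes far faster than the diffusive simple walk. The cycle length is taken odd to avoid parity periodicity.

```python
import numpy as np

n = 63  # odd cycle length

def mixing_time(P, p0, target, marginal=None, tmax=20_000):
    """First t where the total-variation distance to `target` drops below 1/4."""
    p = p0.copy()
    for t in range(1, tmax):
        p = p @ P
        q = marginal(p) if marginal else p
        if 0.5 * np.abs(q - target).sum() < 0.25:
            return t
    return tmax

# simple lazy random walk on the n-cycle: mixes in ~ n^2 steps
P = np.zeros((n, n))
for v in range(n):
    P[v, v] = 0.5
    P[v, (v - 1) % n] = P[v, (v + 1) % n] = 0.25

# lifted walk: state = (position, direction); keep direction w.p. 1 - 1/n,
# reverse w.p. 1/n -- this mixes in ~ n steps
L = np.zeros((2 * n, 2 * n))
for v in range(n):
    L[v, (v + 1) % n] = 1 - 1 / n            # (v,+) -> (v+1,+)
    L[v, n + (v - 1) % n] = 1 / n            # (v,+) -> (v-1,-)
    L[n + v, n + (v - 1) % n] = 1 - 1 / n    # (v,-) -> (v-1,-)
    L[n + v, (v + 1) % n] = 1 / n            # (v,-) -> (v+1,+)

u = np.full(n, 1 / n)
e0 = np.zeros(n); e0[0] = 1
f0 = np.zeros(2 * n); f0[0] = 1

t_simple = mixing_time(P, e0, u)
t_lifted = mixing_time(L, f0, u, marginal=lambda p: p[:n] + p[n:])
print(f"simple walk: {t_simple} steps, lifted walk: {t_lifted} steps")
```

On the cycle this trick works; the question on the slides is whether any such trick can beat the conductance bound on graphs like the dumbbell.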

stochastic process on a graph, assumed to be:
● linear: the evolution acts linearly on probability distributions
● local: in one step, probability mass moves only along edges of the graph
● invariant: the target (stationary) distribution is preserved at every step
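For a single discrete-time Markov step, the three properties can be phrased as concrete checks on a transition matrix. A minimal sketch (my own glosses of the assumptions, checked on the simple random walk on a 4-cycle):

```python
import numpy as np

def is_stochastic(P, tol=1e-12):
    """Rows are probability distributions (the step p -> p @ P is linear)."""
    return bool(np.all(P >= -tol) and np.allclose(P.sum(axis=1), 1))

def is_local(P, A, tol=1e-12):
    """Mass moves only along edges of the graph (or stays put)."""
    allowed = (A + np.eye(len(A))) > 0
    return bool(np.all(np.abs(P[~allowed]) < tol))

def is_invariant(P, pi, tol=1e-12):
    """The target distribution pi is a fixed point of the step."""
    return bool(np.allclose(pi @ P, pi, atol=tol))

# 4-cycle adjacency and its simple random walk
A = np.array([[0, 1, 0, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 1, 0]], float)
P = A / A.sum(axis=1, keepdims=True)
pi = np.full(4, 0.25)
print(is_stochastic(P), is_local(P, A), is_invariant(P, pi))  # True True True
```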

examples of linear, local and invariant stochastic processes:
● Markov chains, time-averaged MCs, time-inhomogeneous invariant MCs
● lifted MCs, non-backtracking RWs on regular graphs
● imprecise Markov chains, sets of doubly-stochastic matrices
● quantum walks and quantum Markov chains

main theorem: any linear, local and invariant stochastic process has a mixing time of order at least 1/Φ
on the dumbbell graph: mixing time of order at least n², even with time dependence, memory or quantum dynamics

main theorem: any linear, local and invariant stochastic process has a mixing time of order at least 1/Φ
proof:
1) we build a Markov chain simulator of the process
2) we prove the conductance bound for the Markov chain simulator

1) Markov chain simulator of a linear, local and invariant stochastic process
proof: a max-flow min-cut argument shows that the probability flows between consecutive marginals of the process can be routed along the edges of the graph

1) Markov chain simulator of a linear, local and invariant stochastic process
if the stochastic process is linear and local, then the following transition rule simulates it: at time t, send probability from node u to neighbour v in proportion to the process's flow from u to v, divided by the mass currently at u
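On a path graph the flow between two consecutive marginals is forced by cumulative sums, which makes the flow-defined transition rule easy to write down. A sketch of this idea (my own illustration, assuming the next marginal is reachable in one local step):

```python
import numpy as np

def path_simulator_step(p, q):
    """Local transition rule on a path graph turning marginal p into q.

    The net flow across edge (i, i+1) must equal cumsum(p - q)[i];
    dividing the flows by the mass p_i at node i gives the walker's
    transition probabilities (assumes the flows are feasible, i.e. the
    mass leaving each node does not exceed the mass sitting on it).
    """
    n = len(p)
    f = np.cumsum(p - q)[:-1]                          # signed flow per edge
    P = np.zeros((n, n))
    for i in range(n):
        right = max(f[i], 0) if i < n - 1 else 0.0     # mass sent to i+1
        left = max(-f[i - 1], 0) if i > 0 else 0.0     # mass sent to i-1
        if p[i] > 0:
            P[i, i] = (p[i] - right - left) / p[i]
            if i < n - 1:
                P[i, i + 1] = right / p[i]
            if i > 0:
                P[i, i - 1] = left / p[i]
        else:
            P[i, i] = 1.0
    return P

p = np.array([0.7, 0.2, 0.1])
q = np.array([0.4, 0.4, 0.2])
P = path_simulator_step(p, q)
print(np.allclose(p @ P, q))  # True: the rule reproduces the next marginal
```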

1) Markov chain simulator of a linear, local and invariant stochastic process
! the rule is non-Markovian: it depends on the initial state and on time
classic trick: give the walker a timer and a memory of its initial state
= a MC on an enlarged state space (a "lifted MC")

1) Markov chain simulator of a linear, local and invariant stochastic process
this lifted MC simulates the process up to time T
second trick: if the process is invariant, then we can "amplify" = restart the simulation every time the timer reaches T
proposition: the (asymptotic) mixing time of this amplified simulator closely relates to the (asymptotic) mixing time of the original process
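The two tricks fit in a few lines of procedural code. A sketch (my own illustration; `rule(v0, t)` is a hypothetical interface returning the process's transition matrix at time t when started from v0):

```python
import numpy as np

rng = np.random.default_rng(0)

def lifted_step(state, rule, T):
    """One step of the amplified simulator.

    state = (v, v0, t): current node, remembered initial node, timer.
    The walker carries (v0, t) so that the non-Markovian rule becomes a
    Markov chain on the enlarged state space; when the timer reaches T,
    the simulation restarts ("amplification") from the current node.
    """
    v, v0, t = state
    row = rule(v0, t)[v]
    v = int(rng.choice(len(row), p=row))   # move according to the rule
    t += 1
    if t == T:                             # timer full: restart the simulation
        v0, t = v, 0                       # current node is the new start
    return (v, v0, t)

def rule(v0, t):
    # toy time-dependent rule on a 3-node path: spread for two steps, then idle
    spread = np.array([[0.50, 0.50, 0.00],
                       [0.25, 0.50, 0.25],
                       [0.00, 0.50, 0.50]])
    return spread if t < 2 else np.eye(3)

state = (0, 0, 0)
for _ in range(10):
    state = lifted_step(state, rule, T=4)
print(state)
```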

2) the Markov chain simulator obeys a conductance bound:
the simulator is a Markov chain on an enlarged state space
+ conductance cannot be increased by lifting
= main theorem: any linear, local and invariant stochastic process has a mixing time of order at least 1/Φ
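In slide terms, the chain of inequalities behind this conclusion might be summarized as follows (a sketch; c denotes an absolute constant and Φ the conductance of the base graph G):

```latex
\tau_{\text{process}} \;\gtrsim\; \tau_{\text{simulator}}
\;\geq\; \frac{c}{\Phi(\text{lifted chain})}
\;\geq\; \frac{c}{\Phi(G)},
```

where the last step uses $\Phi(\text{lifted chain}) \leq \Phi(G)$: every cut of $G$ lifts to a cut of the enlarged state space carrying the same relative probability flow.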

main theorem: any linear, local and invariant stochastic process has a mixing time of order at least 1/Φ
example 1: dumbbell graph
any linear, local and invariant stochastic process on the dumbbell graph has a mixing time of order at least n²

main theorem: any linear, local and invariant stochastic process has a mixing time of order at least 1/Φ
example 2: binary tree
the complete binary tree has conductance of order 1/n (cut the edge at the root), so any linear, local and invariant stochastic process on it has a mixing time of order at least n, far above the diameter ~ 2 log n
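The root-cut conductance of the binary tree is easy to compute exactly. A small sketch (my own illustration) on the complete binary tree of depth 6:

```python
import numpy as np

depth = 6
n = 2 ** (depth + 1) - 1          # complete binary tree on n = 127 nodes

# heap-indexed adjacency: node i has children 2i+1 and 2i+2
A = np.zeros((n, n))
for i in range(n):
    for c in (2 * i + 1, 2 * i + 2):
        if c < n:
            A[i, c] = A[c, i] = 1

deg = A.sum(axis=1)
pi = deg / deg.sum()
P = A / deg[:, None]

# cut at the root: S = the left subtree (rooted at node 1)
S, stack = [], [1]
while stack:
    i = stack.pop()
    if i < n:
        S.append(i)
        stack += [2 * i + 1, 2 * i + 2]
S = np.array(S)

comp = np.setdiff1d(np.arange(n), S)
Q = (pi[S, None] * P[S][:, comp]).sum()   # flow out of S: one edge only
phi = Q / min(pi[S].sum(), 1 - pi[S].sum())
print(f"n = {n}, diameter = {2 * depth}, 1/phi ~ {1/phi:.0f}")
```

Here 1/Φ is of order n while the diameter is 2·depth = 12, so on trees the theorem rules out any exponential speed-up from memory, time dependence or quantum dynamics.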
