On Stabilization in Herman’s Algorithm (ICALP 2011)


slide-1
SLIDE 1

On Stabilization in Herman’s Algorithm

ICALP 2011. Stefan Kiefer¹, Andrzej S. Murawski², Joël Ouaknine¹, James Worrell¹, Lijun Zhang³

¹Department of Computer Science, University of Oxford, UK; ²Department of Computer Science, University of Leicester, UK; ³DTU Informatics, Technical University of Denmark, Denmark

slide-2
SLIDE 2

Particle Collision: CERN

slide-3
SLIDE 3

Herman’s Algorithm

Herman’s algorithm is a randomized leader election protocol:
- Ring of N processors (N is odd); some processors have a token.
- Pick a token randomly and move it clockwise.
- When a processor has two tokens, both disappear.
- Once there is only one token, this processor is the leader.

slide-10
SLIDE 10

Herman’s Algorithm

[Figure: a ring of N processors with bits.] Implementation: a processor holds a token iff it has the same bit as its counter-clockwise neighbour.

slide-15
SLIDE 15

Herman’s Algorithm

[Figure: a ring of N processors with bits.] The number of tokens is odd. At any time. Guaranteed: around a ring the number of adjacent bit-disagreements is even, so with N odd the number of agreements (tokens) is odd, and tokens only disappear in pairs.

slide-16
SLIDE 16

Herman’s Algorithm

Main Question: How long does it take, on average, until a leader is elected?

slide-17
SLIDE 17

Herman’s Algorithm

Two versions of Herman’s algorithm:
- Asynchronous: each processor with a token passes the token to its clockwise neighbour at rate λ; this yields a continuous-time Markov chain (as seen on the previous slides).
- Synchronous: in each round, each processor with a token passes it (i.e., flips its bit) with probability 1/2 and keeps it (i.e., keeps its bit) with probability 1/2; this yields a discrete-time Markov chain.
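The synchronous dynamics can be sketched with a small simulation (a sketch of ours, not code from the talk; the function name and bit encoding are assumptions, using the "token = same bit as counter-clockwise neighbour" implementation):

```python
import random

def herman_sync_steps(bits, r=0.5, rng=None):
    """Simulate the synchronous version until a single token remains.

    bits[i] is processor i's bit; processor i holds a token iff
    bits[i] == bits[i-1] (same bit as its counter-clockwise neighbour).
    Returns the number of synchronous rounds taken.
    """
    rng = rng or random.Random()
    bits = list(bits)
    N = len(bits)
    assert N % 2 == 1, "the ring size N must be odd"
    steps = 0
    while True:
        # a processor has a token iff it agrees with its counter-clockwise neighbour
        tokens = [i for i in range(N) if bits[i] == bits[i - 1]]
        if len(tokens) == 1:
            return steps
        new_bits = list(bits)
        for i in tokens:
            if rng.random() < r:          # pass the token: flip own bit
                new_bits[i] = 1 - bits[i]
        bits = new_bits
        steps += 1
```

For example, `herman_sync_steps([0, 1, 0, 1, 0])` returns 0: the alternating ring of odd size already has exactly one token (at the wrap-around).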

slide-18
SLIDE 18

Herman’s Algorithm

Illustration of the synchronous version:

slide-22
SLIDE 22

Herman’s Algorithm

Illustration of the synchronous version. The synchronous version is the standard one; the probability of passing is usually chosen as 1/2, but it is a parameter in our analysis. We study both versions (asynchronous and synchronous).

slide-23
SLIDE 23

Self-Stabilization

Self-stabilization is a concept of fault-tolerance: any configuration leads to a “legitimate” configuration (in Herman’s algorithm, a single-token configuration). Dijkstra (1974): in a token ring there is no symmetric deterministic self-stabilizing algorithm, which is why Herman’s protocol is randomized.


slide-25
SLIDE 25

Time to Stabilization

Let T := time until a 1-token configuration is reached.

What is ET?

A Markov chain argument shows that ET is finite for any initial configuration.

slide-26
SLIDE 26

Time to Stabilization

Let T := time until a 1-token configuration is reached. What is ET?

A Markov chain argument shows that ET is finite for any initial configuration. Known bounds:
- 1990, Herman: invented the protocol; showed ET ≤ 0.5 N² log N and mentioned an improvement to ET ≤ c·N².
- 2004, Fribourg et al.: ET ≤ 2N².
- 2005, McIver/Morgan: ET ≤ 2N² (independently).
- 2005, Nakata: ET ≤ 0.93N².

slide-27
SLIDE 27

Time to Stabilization

Let T := time until a 1-token configuration is reached. Our new bound:

Theorem. Let D = r(1 − r) (for the synchronous version) or D = λ (for the asynchronous version). Then

ET ≤ (π²/8 − 29/27) · N²/D.

In particular, for the synchronous version with r = 1/2, we have ET ≤ 0.64N².
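As a quick sanity check on the constant in the theorem (a small script of ours, not part of the talk):

```python
import math

c = math.pi ** 2 / 8 - 29 / 27   # the constant in the theorem
print(round(c, 4))               # 0.1596
# Synchronous with r = 1/2: D = r * (1 - r) = 1/4, so ET <= c * N^2 / D = 4c * N^2.
print(round(4 * c, 4))           # 0.6385, i.e. ET <= 0.64 * N^2
```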

slide-28
SLIDE 28

The 3-Token Case

slide-29
SLIDE 29

The 3-Token Case

[Figure: three tokens on the ring at pairwise clockwise distances a, b, c, with a + b + c = N.] Precise formula for 3 tokens [MM’05]: ET = 4abc/N.

slide-30
SLIDE 30

The 3-Token Case

The 3-token formula ET = 4abc/N [MM’05] is maximized by the equilateral configuration a = b = c = N/3, giving ET = (4/27)N² ≈ 0.148N².

slide-31
SLIDE 31

The 3-Token Case

Open conjecture [MM’05]: the 3-token equilateral configuration is worse than any other configuration. If true, then ET ≤ 0.148N² for every initial configuration (cf. our general bound ET ≤ 0.64N²).
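The maximization step can be checked by brute force for a small ring (our sketch, not from the talk; N = 15 is an arbitrary choice):

```python
# For N = 15, maximize the 3-token formula ET = 4abc/N over all
# inter-token distances with a + b + c = N; the maximum should be
# attained at the equilateral configuration a = b = c = N/3 = 5.
N = 15
best_val, best_cfg = max(
    (4 * a * b * c / N, (a, b, c))
    for a in range(1, N - 1)
    for b in range(1, N - a)
    for c in (N - a - b,)
)
print(best_cfg)  # (5, 5, 5), matching a = b = c = N/3
```

The maximum value equals (4/27)N², in line with the 0.148N² figure above.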

slide-32
SLIDE 32

The Full Configuration

Theorem. Let D = r(1 − r) (for the synchronous version) or D = λ (for the asynchronous version). Starting in the full configuration (every processor holds a token), for almost all odd N: ET ≤ 0.0285N²/D. In particular, for the synchronous version with r = 1/2, we have ET ≤ 0.114N². (Cf. ET ≤ 0.64N² for a general configuration and ET = 0.148N² for the equilateral 3-token configuration.)
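A Monte Carlo sketch of ET from the full configuration (all bits equal, so every processor holds a token), synchronous version with r = 1/2; the code and names are ours and the estimate is only indicative:

```python
import random

def steps_until_one_token(bits, r=0.5, rng=random):
    """One synchronous run: each token-holder flips its bit with probability r."""
    bits = list(bits)
    N = len(bits)
    steps = 0
    while True:
        tokens = [i for i in range(N) if bits[i] == bits[i - 1]]
        if len(tokens) == 1:
            return steps
        new_bits = list(bits)
        for i in tokens:
            if rng.random() < r:          # pass the token: flip own bit
                new_bits[i] = 1 - bits[i]
        bits = new_bits
        steps += 1

N, trials = 15, 200
rng = random.Random(2011)
mean = sum(steps_until_one_token([0] * N, rng=rng) for _ in range(trials)) / trials
print(mean)  # rough estimate; compare with the bound 0.114 * N**2 = 25.65
```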

slide-37
SLIDE 37

The Full Configuration

What about a random initial configuration? A random initial configuration is reached after the first step (one synchronous step from the full configuration randomizes the bits), so there is no need for a different analysis: the full-configuration bound applies.

slide-38
SLIDE 38

Restabilization

Imagine: starting from 1 token . . . [Figure: a ring of bits with a single token.]

slide-39
SLIDE 39

Restabilization

Imagine: starting from 1 token, 2 bit-errors occur (two bits are flipped), creating new tokens: a “flip-2 configuration”.

slide-43
SLIDE 43

Restabilization

Imagine: starting from 1 token, m bit-errors occur (a “flip-m configuration”).

Theorem. Consider the synchronous protocol with r ∈ (0.12, 0.88), and fix m. Then for any flip-m configuration: ET = O(N) (cf. O(N²) for all the other bounds).

slide-44
SLIDE 44

Interacting Particle Systems

Interacting particles under random motion occur in:
- statistical mechanics
- neural networks
- spread of infections
- tumor growth
- . . . (other physical and medical sciences)
- physical chemistry [Balding’88]

slide-45
SLIDE 45

Analysis Techniques

Key ingredients: Taking the limit N → ∞ leads to Brownian motion.

slide-54
SLIDE 54

Analysis Techniques

Key ingredients:
- Taking the limit N → ∞ leads to Brownian motion.
- Tokens with two colours simplify the analysis of the possible interactions.

slide-69
SLIDE 69

Two Colours: 3 Tokens

[Figure: three tokens labelled A, B, C on the ring.] For any time t ≥ 0:

Pr(T ≤ t) = Pr(A → B) − Pr(B → A) + Pr(B → C) − Pr(C → B) + Pr(C → A) − Pr(A → C)

slide-74
SLIDE 74

Two Colours: 3 Tokens

For any time t ≥ 0:

Pr(T ≤ t) = Pr(A → B) − Pr(B → A) + Pr(B → C) − Pr(C → B) + Pr(C → A) − Pr(A → C)

Each term of Pr(T ≤ t) can be expressed in terms of a 1-D random walk with two absorbing barriers, whose distributions are well known. The principle generalizes to more than 3 tokens.
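The "two absorbing barriers" ingredient can be illustrated with the classical expected absorption time of a simple symmetric random walk (an illustration of ours, not code from the paper; the distance between two tokens behaves like a lazy variant of such a walk):

```python
# Expected absorption time E[k] of a simple symmetric random walk on
# {0, ..., n} with absorbing barriers at 0 and n, computed by iterating
# the first-step recurrence E[k] = 1 + (E[k-1] + E[k+1]) / 2.
# The well-known closed form is E[k] = k * (n - k).
n = 10
E = [0.0] * (n + 1)            # E[0] = E[n] = 0 at the barriers
for _ in range(10000):         # fixed-point (Gauss-Seidel) iteration
    for k in range(1, n):
        E[k] = 1 + (E[k - 1] + E[k + 1]) / 2
print([round(e) for e in E])   # [0, 9, 16, 21, 24, 25, 24, 21, 16, 9, 0]
```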

slide-75
SLIDE 75

We use Mathematics . . .

slide-76
SLIDE 76

Summary

Upper bounds for ET for various initial configurations:
- arbitrary configuration (improved bound)
- full configuration (new bound)
- random configuration (new bound)
- flip-m configurations (new bound, O(N))
Also considered: the asynchronous version, and the synchronous version with passing probability other than 1/2.
Main techniques used: the limit N → ∞ (Brownian motion) and tokens with two colours.
Main open question is still: is the 3-token equilateral configuration the worst case?

slide-77
SLIDE 77

Particle Collision: CERN