Using Erlang for Distributed Simulation for the Derivation of Fault Tolerance Measures - PowerPoint PPT Presentation

SLIDE 1

Motivation Theory Erlang Simulation Results Conclusion

Using Erlang for Distributed Simulation for the Derivation of Fault Tolerance Measures

Nils Müllner, August 19, 2008

1 / 28

SLIDE 2

Outline

◮ Motivation
◮ Theory
◮ Erlang
◮ Simulation
◮ Conclusion

SLIDES 3-5

Motivation

◮ Why Fault Tolerance?
◮ Why Simulation?
◮ Why Erlang?

SLIDES 6-9

Fault Tolerance Measures

◮ Reliability, Availability, Safety, Trustworthiness

[Figure: fault timeline — after a fault, the MTTR (mean time to repair; multiple errors may be repaired in this period) is followed by an operational phase lasting MTTF (mean time to failure); MTBF = MTTR + MTTF.]

◮ Essential for Critical Systems
◮ Masking, Nonmasking and Failsafe
  ◮ Masking: Safety and Liveness
  ◮ Nonmasking: Liveness
  ◮ Failsafe: Safety
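For reference, the timeline quantities relate by the standard steady-state formulas of reliability theory (cf. [Trivedi, 1982]); the slide itself only shows the timeline:

```latex
\mathrm{MTBF} = \mathrm{MTTF} + \mathrm{MTTR},
\qquad
A = \frac{\mathrm{MTTF}}{\mathrm{MTTF} + \mathrm{MTTR}}
```

where A denotes the steady-state availability, i.e. the long-run fraction of time the system is operational.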

SLIDES 10-14

Simulation

◮ Easy and fast to implement
◮ More accurate than analysis
◮ Extremely scalable
◮ Suitable for a large class of problems
◮ BUT: Requires (many) resources

SLIDES 15-19

Erlang

◮ Distributed
◮ Concurrent
◮ Functional
◮ λ-calculus [Barendregt and Barendsen, 2000]
◮ pure (no side effects, lazy evaluation) and eager

SLIDES 20-25

Functional Languages

◮ Lisp, Haskell, Scheme, Erlang
◮ Often combined with other paradigms (logical, imperative, object-oriented, constraint, distributed, and concurrent programming)
◮ Functions are algorithms
◮ Algorithms can be split into subalgorithms
◮ Parallelization by modularizing programs
◮ Easy to verify
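The last two bullets — splitting algorithms into subfunctions and parallelizing the resulting modules — are exactly what Erlang's lightweight processes make cheap. A minimal sketch (an illustration, not code from the talk) of the classic parallel map:

```erlang
-module(pmap_demo).
-export([pmap/2]).

%% Parallel map: spawn one lightweight process per list element,
%% tag each result with a unique reference, and collect the
%% results back in the original order.
pmap(F, List) ->
    Parent = self(),
    Refs = [begin
                Ref = make_ref(),
                spawn(fun() -> Parent ! {Ref, F(X)} end),
                Ref
            end || X <- List],
    %% A bound Ref in a receive pattern matches selectively,
    %% so results arrive in list order regardless of timing.
    [receive {Ref, Result} -> Result end || Ref <- Refs].
```

For example, `pmap_demo:pmap(fun(X) -> X*X end, [1,2,3,4])` evaluates the squares concurrently and returns `[1,4,9,16]`.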

SLIDES 26-28

So, what do we want?

◮ Simulation with
◮ a Functional Language to
◮ derive Fault Tolerance Measures

SLIDES 29-32

Getting Results with Analytic Methods: Theory

◮ Model Distributed System as Markov Chain

[Figure: three processes P_1, P_2, P_3 translated (=>) into a Markov chain.]

◮ Suffers from state space explosion
◮ Solution: Partition state space
◮ Problem: Abstraction tremendously hinders the accuracy of the derived results
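As a reminder of the quantity such an analysis computes (standard Markov-chain theory, cf. [Trivedi, 1982]; the symbols below are ours, not the slides'): for a finite ergodic chain with transition matrix P, the limiting availability is the stationary probability mass on the legal states:

```latex
\pi P = \pi, \qquad \sum_i \pi_i = 1, \qquad
A \;=\; \sum_{s \in \mathcal{L}} \pi_s
```

where \(\mathcal{L}\) denotes the set of legal (safe) system states. State space explosion strikes here because \(P\) has one row per global state, i.e. exponentially many in the number of processes.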

SLIDES 33-36

Theory

◮ Only conservative estimations
◮ Not even close to reality... (cf. [Dhama et al., 2006])
◮ Size of applicable topologies very limited
◮ Advantage: results are proven...

SLIDES 37-41

Erlang 1/5

◮ Development started in 1986 as a Prolog interpreter at Ericsson CSLab
◮ A language for programming distributed fault-tolerant soft real-time non-stop applications
◮ Purely Functional Language
◮ Interpreted or compiled
◮ Hot Code Plugging

SLIDES 42-48

Erlang 2/5

◮ Focuses on parallelism and fault tolerance
◮ Highly reliable (Switch AXD301 is 99.9999999% reliable, 31 ms/yr downtime)
◮ employs OpenSSL (χ²-test)
◮ No variables => instantiated constants
◮ No loops => recursive function calls
◮ No variable declarations => duck types
◮ Prolog Style Syntax, but not a logic language!

SLIDE 49

-module(math).
-export([fac/1]).

fac(N) when N > 0 -> N * fac(N - 1);
fac(0) -> 1.
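A note for trying the slide's factorial at home: the standard library already ships a `math` module in a sticky directory, so loading a user-defined `math` from the shell fails; renaming the module (here to `fac`, our choice, not the talk's) avoids the clash:

```erlang
%% fac.erl -- the slide's factorial under a different module name,
%% because stdlib's `math' module is sticky and cannot be replaced.
-module(fac).
-export([fac/1]).

fac(N) when N > 0 -> N * fac(N - 1);
fac(0) -> 1.
```

In the shell: `c(fac).` then `fac:fac(5).` evaluates to `120`.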

SLIDE 50

-module(pingpong).
-export([start/0, ping/2, pong/0]).

ping(0, Pong_PID) ->
    Pong_PID ! finished,
    io:format("ping finished~n", []);
ping(N, Pong_PID) ->
    Pong_PID ! {ping, self()},
    receive
        pong -> io:format("Ping received pong~n", [])
    end,
    ping(N - 1, Pong_PID).

SLIDE 51

pong() ->
    receive
        finished ->
            io:format("Pong finished~n", []);
        {ping, Ping_PID} ->
            io:format("Pong received ping~n", []),
            Ping_PID ! pong,
            pong()
    end.

start() ->
    Pong_PID = spawn(pingpong, pong, []),
    spawn(pingpong, ping, [3, Pong_PID]).

SLIDES 52-59

Simulation Framework 1/5

◮ monitoring facility (prints every nth step)
◮ runs until desired accuracy is reached (maximal acceptable deviation within last n turns)
◮ four distributed self-stabilizing algorithms provided
  ◮ Breadth First Search
  ◮ Depth First Search
  ◮ Leader Election
  ◮ Mutual Exclusion
◮ easy to extend

SLIDES 60-66

Simulation Framework 2/5

◮ exact fault environments (specify distinct values for each vertex and edge)
◮ dynamic fault environments
◮ dynamic execution semantics possible (number of nodes executing per step in parallel)
◮ external fault injection and monitoring facilities
◮ event logging (if needed)
◮ choice of schedulers (three provided)
◮ Load balancing (each client a lightweight process, can be mapped to any processor/computer)

SLIDE 67

Simulation Framework 3/5

SLIDE 68

Simulation Framework 4/5

[Module diagram: server, client, and fault_injector, each backed by generic and algorithm-specific modules — client_algorithm (client_algorithm_bfs, client_algorithm_dfs, client_algorithm_le, client_algorithm_mutex), fault_injector (fault_injector_bfs, fault_injector_dfs, fault_injector_le, fault_injector_mutex), and matrix_init (matrix_init_bfs, matrix_init_dfs, matrix_init_le, matrix_init_mutex). A run is started with server:start(). client:start(). fault_injector:start().]
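How the server/client split scales can be sketched with a toy miniature — the module, function, and message names below are illustrative assumptions, not the framework's actual API: the server spawns one lightweight process per graph node and tallies their replies.

```erlang
-module(toy_sim).
-export([start/1]).

%% Hypothetical miniature of the framework's server/client split:
%% spawn one lightweight client process per node, then count how
%% many report the completion of a step.
start(NumNodes) ->
    Server = self(),
    [spawn(fun() -> client(Id, Server) end)
     || Id <- lists:seq(1, NumNodes)],
    collect(NumNodes, 0).

%% In the real framework a client would execute one step of a
%% self-stabilizing algorithm here before reporting back.
client(Id, Server) ->
    Server ! {done, Id}.

collect(0, Count) -> Count;
collect(N, Count) ->
    receive {done, _Id} -> collect(N - 1, Count + 1) end.
```

Because each client is an ordinary Erlang process, the same structure load-balances across schedulers, and with `spawn/4` on a node name, across machines.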

SLIDE 69

Accuracy 1/2

[Figure: availability (y-axis, 0.0-0.6) over the first 20,000 simulation steps (x-axis) of an eight-processor system.]

This figure exemplifies the availability over the first 20,000 steps of an eight-processor system. The desired accuracy is reached when the maximum deviation within the last n steps drops below a certain threshold. The results presented in the following feature about 1,000,000 steps per system node.
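The stopping rule described above — run until the estimate has settled — can be sketched as a guard over the last n availability samples (an illustration under assumed names, not the framework's actual code):

```erlang
-module(accuracy).
-export([converged/2]).

%% A run has converged when the spread (max - min) of the recent
%% availability estimates stays within the acceptable deviation.
converged(Samples, Threshold) when Samples =/= [] ->
    lists:max(Samples) - lists:min(Samples) =< Threshold.
```

For instance, `accuracy:converged([0.52, 0.53, 0.525], 0.02)` is `true`, so a simulation monitoring its last n estimates this way would stop.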

SLIDE 70

Accuracy 2/2

[Figure: availability (y-axis, 0.00-0.17) against the error probability for each receiving node and each edge (x-axis, 10-100), comparing insufficiently strict with sufficiently strict accuracy guards.]

Strictness of accuracy guards is crucial for the reliability of results!

SLIDE 71

Test Case: All Possible 4-node Graphs

[Figure: the eleven 4-node graph topologies, numbered 1-11.]

We chose the depth first search (DFS) and breadth first search (BFS) algorithms for comparison with the analytic approach, executed on all possible 4-node graphs.

SLIDE 72

Breadth First Search - Simulation

[Figure: limiting availability (y-axis, 0.00-0.40) against the global node error probability (x-axis, 10-100), one curve per topology 1-11.]

SLIDE 73

Breadth First Search - Analysis

[Figure: same axes, one curve per topology 1-11.]

SLIDE 74

Depth First Search - Simulation

[Figure: same axes, one curve per topology 1-11.]

SLIDE 75

Depth First Search - Analysis

[Figure: same axes, one curve per topology 1-11.]

SLIDES 76-78

Conclusions

Derivation of fault tolerance measures by simulation

◮ reason: the analytic method is insufficient
◮ method: simulation of self-stabilizing distributed algorithms
◮ features: modular design, scalability, performance, reliability of results

SLIDE 79

References

Barendregt, H. and Barendsen, E. (2000). Introduction to Lambda Calculus. In Aspenäs Workshop on Implementation of Functional Languages, Göteborg. Programming Methodology Group, University of Göteborg and Chalmers University of Technology.

Dhama, A., Theel, O., and Warns, T. (2006). Reliability and Availability Analysis of Self-Stabilizing Systems. In 8th International Symposium on Stabilization, Safety, and Security of Distributed Systems, page 17. Springer.

Dolev, S. (2000). Self-Stabilization. MIT Press.

Müllner, N., Dhama, A., and Theel, O. (2008). Derivation of Fault Tolerance Measures of Self-Stabilizing Algorithms by Simulation. In ANSS '08: Proceedings of the 41st Annual Symposium on Simulation, Ottawa, Ontario, Canada. IEEE Computer Society Press.

Schneider, M. (1993). Self-stabilization. ACM Comput. Surv., 25(1):45-67.

Trivedi, K. S. (1982). Probability and Statistics with Reliability, Queuing and Computer Science Applications. Prentice Hall PTR, Upper Saddle River, NJ, USA.