Concurrent Programming - Romolo Marotta - Data Centers and High Performance Computing


SLIDE 1

Concurrent Programming

Romolo Marotta

Data Centers and High Performance Computing

SLIDE 2

Amdahl Law—Fixed-size Model (1967)

  • The workload is fixed: it studies how the behaviour of the same program varies when adding more computing power

    S_Amdahl = T_s / T_p = T_s / (α·T_s + (1 − α)·T_s / p) = 1 / (α + (1 − α)/p)

  • where:
    • α ∈ [0, 1]: serial fraction of the program
    • p ∈ ℕ: number of processors
    • T_s: serial execution time
    • T_p: parallel execution time

  • It can be expressed as well vs. the parallel fraction P = 1 − α

2 of 46 - Concurrent Programming
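Amdahl's formula is easy to play with numerically. A minimal sketch (the function name `amdahl_speedup` is my own):

```python
def amdahl_speedup(alpha: float, p: int) -> float:
    """Amdahl speedup 1 / (alpha + (1 - alpha) / p),
    where alpha is the serial fraction and p the processor count."""
    return 1.0 / (alpha + (1.0 - alpha) / p)

# A 5% serial fraction already caps 10 processors well below 10x...
print(round(amdahl_speedup(0.05, 10), 2))      # 6.9
# ...and no processor count can push the speedup past 1/alpha = 20:
print(round(amdahl_speedup(0.05, 10**9), 2))   # 20.0
```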

SLIDE 3

Fixed-size Model


SLIDE 4

Speed-up According to Amdahl

[Plot: parallel speedup vs. number of processors (1 to 10) under Amdahl's law, for serial fractions α = 0.95, 0.8, 0.5, 0.2, against the linear-speedup line]

SLIDE 6

How Real is This?

  lim_{p→∞} S_Amdahl = lim_{p→∞} 1 / (α + (1 − α)/p) = 1/α

  • So if the sequential fraction is 20%, we have:

    lim_{p→∞} S_Amdahl = 1/0.2 = 5

  • Speedup 5 using infinite processors!

SLIDE 7

Gustafson Law—Fixed-time Model (1989)

  • The execution time is fixed: it studies how the behaviour of a scaled program varies when adding more computing power

    W′ = α·W + (1 − α)·p·W
    S_Gustafson = W′ / W = α + (1 − α)·p

  • where:
    • α ∈ [0, 1]: serial fraction of the program
    • p ∈ ℕ: number of processors
    • W: original workload
    • W′: scaled workload
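As a quick numeric sketch (the function name is mine):

```python
def gustafson_speedup(alpha: float, p: int) -> float:
    """Gustafson speedup for the scaled workload W' = alpha*W + (1-alpha)*p*W."""
    return alpha + (1.0 - alpha) * p

# Unlike Amdahl, the same 5% serial fraction now scales almost linearly:
print(round(gustafson_speedup(0.05, 10), 2))   # 9.55
```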

SLIDE 8

Fixed-time Model


SLIDE 9

Speed-up According to Gustafson

[Plot: parallel speedup vs. number of processors (1 to 10) under Gustafson's law, for serial fractions α = 0.95, 0.8, 0.5, 0.2, against the linear-speedup line]

SLIDE 10

Amdahl vs. Gustafson—a Driver’s Experience

Amdahl Law:

A car is traveling between two cities 60 km apart and has already covered half the distance at 30 km/h. No matter how fast you drive the last half, it is impossible to average 90 km/h before reaching the second city: it has already taken you 1 hour and there are only 60 km in total, so even going infinitely fast you would only average 60 km/h.

Gustafson Law:

A car has been travelling for some time at less than 90 km/h. Given enough time and distance to travel, the car's average speed can always eventually reach 90 km/h, no matter how long or how slowly it has already traveled. If the car spent one hour at 30 km/h, it could achieve this by driving at 120 km/h for two additional hours.

SLIDE 11

Sun, Ni Law—Memory-bounded Model (1993)

  • The workload is scaled, bounded by memory

    S_Sun-Ni = (sequential time for workload W*) / (parallel time for workload W*)
             = (α·W + (1 − α)·G(p)·W) / (α·W + (1 − α)·G(p)·W / p)
             = (α + (1 − α)·G(p)) / (α + (1 − α)·G(p)/p)

  • where:
    • G(p) describes the workload increase as the memory capacity increases
    • W* = α·W + (1 − α)·G(p)·W

SLIDE 12

Memory-bounded Model


SLIDE 16

Speed-up According to Sun, Ni

  S_Sun-Ni = (α + (1 − α)·G(p)) / (α + (1 − α)·G(p)/p)

  • If G(p) = 1, it reduces to Amdahl:  S_Amdahl = 1 / (α + (1 − α)/p)
  • If G(p) = p, it reduces to Gustafson:  S_Gustafson = α + (1 − α)·p
  • In general, G(p) > p gives a higher scale-up
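The two reductions can be checked mechanically. A sketch, with a caller-supplied scaling function G (all names mine):

```python
import math

def sun_ni_speedup(alpha: float, p: int, G) -> float:
    """Sun-Ni speedup for a memory-bounded workload scaling function G(p)."""
    g = G(p)
    return (alpha + (1 - alpha) * g) / (alpha + (1 - alpha) * g / p)

alpha, p = 0.2, 8
amdahl = 1 / (alpha + (1 - alpha) / p)
gustafson = alpha + (1 - alpha) * p

# G(p) = 1 recovers the fixed-size model, G(p) = p the fixed-time model:
print(math.isclose(sun_ni_speedup(alpha, p, lambda p: 1), amdahl))     # True
print(math.isclose(sun_ni_speedup(alpha, p, lambda p: p), gustafson))  # True
```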

SLIDE 17

Application Model for Parallel Computers

[Diagram: workload vs. machine size for the fixed-workload model (communication-bound), the fixed-time model, and the fixed-memory model (memory-bound)]

SLIDE 18

Scalability

  • Efficiency: E = speed-up / number of processors
  • Strong scalability: the efficiency is kept fixed while increasing the number of processes and keeping the problem size fixed
  • Weak scalability: the efficiency is kept fixed while increasing the problem size and the number of processes at the same rate
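The contrast between the two scaling regimes can be sketched numerically with the two speedup models (function names mine):

```python
def efficiency(speedup: float, p: int) -> float:
    """E = speed-up / number of processors."""
    return speedup / p

def amdahl(alpha, p):      # fixed problem size (strong scaling)
    return 1 / (alpha + (1 - alpha) / p)

def gustafson(alpha, p):   # problem size grown with p (weak scaling)
    return alpha + (1 - alpha) * p

# Under Amdahl, efficiency decays as processors are added;
# under Gustafson it stays close to the parallel fraction 1 - alpha.
for p in (1, 2, 4, 8):
    print(p, round(efficiency(amdahl(0.1, p), p), 2),
             round(efficiency(gustafson(0.1, p), p), 2))
```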

SLIDE 20

Superlinear Speedup

  • Can we have a speed-up > p? Yes!
  • Workload increases more than computing power (G(p) > p)
  • Cache effect: larger accumulated cache size. More (or even all) of the working set can fit into the caches, and the memory access time drops dramatically
  • RAM effect: the dataset can move from disk into RAM, drastically reducing the time required, e.g., to search it
  • The parallel algorithm uses some search like a random walk: the more processors are walking, the less distance has to be walked in total before you reach what you are looking for

SLIDE 21

Parallel Programming

  • Ad-hoc concurrent programming languages
  • Development tools:
    • Compilers try to optimize the code
    • MPI, OpenMP, libraries...
    • Tools to ease the task of debugging parallel code (gdb, valgrind, ...)
  • Writing parallel code is for artists, not scientists!
    • There are approaches, not prepackaged solutions
    • Every machine has its own singularities
    • Every problem to face has different requisites
    • The most efficient parallel algorithm is not the most intuitive one

SLIDE 22

Ad-hoc languages

Ada Alef ChucK Clojure Curry Cω E Eiffel Erlang Go Java Julia Joule Limbo Occam Orc Oz Pict Rust SALSA Scala SequenceL SR Unified Parallel C XProc


SLIDE 23

Classical Approach to Concurrent Programming

  • Based on blocking primitives
    • Semaphores
    • Lock acquisition
    • ...

  Shared: Semaphore p = 0, c = 0; Buffer b;

  PRODUCER:
      while(1) {
          <Write on b>
          signal(p);
          wait(c);
      }

  CONSUMER:
      while(1) {
          wait(p);
          <Read from b>
          signal(c);
      }
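The pseudocode above maps directly onto counting semaphores. A runnable sketch using Python's threading module (bounded to N items instead of while(1); the strict hand-shake forces the two threads to alternate):

```python
import threading

p_sem = threading.Semaphore(0)   # "p": signalled by the producer
c_sem = threading.Semaphore(0)   # "c": signalled by the consumer
buf = [None]                     # the shared one-slot buffer b
consumed = []
N = 5

def producer():
    for i in range(N):
        buf[0] = i               # <Write on b>
        p_sem.release()          # signal(p)
        c_sem.acquire()          # wait(c)

def consumer():
    for _ in range(N):
        p_sem.acquire()          # wait(p)
        consumed.append(buf[0])  # <Read from b>
        c_sem.release()          # signal(c)

threads = [threading.Thread(target=producer), threading.Thread(target=consumer)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(consumed)                  # [0, 1, 2, 3, 4]
```

Note that both semaphores start at 0, so neither thread can run ahead: the producer blocks until its item is consumed, and the consumer blocks until an item exists.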

SLIDE 25

Parallel Programs Properties

  • Safety: nothing wrong happens
  • It’s called Correctness as well
  • Liveness: eventually something good happens
  • It’s called Progress as well


SLIDE 26

Correctness

  • What does it mean for a program to be correct?
  • What exactly is a concurrent FIFO queue?
    • FIFO implies a strict temporal ordering
    • Concurrent implies an ambiguous temporal ordering
  • Intuitively, if we rely on locks, changes happen in a non-interleaved fashion, resembling a sequential execution
  • We can say a concurrent execution is correct only because we can associate it with a sequential one, whose functioning we know
  • A concurrent execution is correct if it is equivalent to a correct sequential execution

SLIDE 27

A simplified model of a concurrent system

  • A concurrent system is a collection of sequential threads that communicate through shared data structures called objects.
  • An object has a unique name and a set of primitive operations.
  • An invocation of an operation op of the object x is written as "A op(args*) x", where A is the invoking thread and args* is the sequence of arguments.
  • A response to an operation invocation on x is written as "A ret(res*) x", where A is the invoking thread and res* is the sequence of results.

SLIDE 30

A simplified model of a concurrent execution

  • A history is a sequence of invocations and responses generated on an object by a set of threads
  • A sequential history is a history where every invocation has an immediate response
  • A concurrent history is a history that is not sequential

  Sequential H′: A op() x, A ret() x, B op() x, B ret() x, A op() y, A ret() y
  Concurrent H:  A op() x, B op() x, A ret() x, A op() y, B ret() x, A ret() y

SLIDE 34

A simplified model of a concurrent execution (2)

  • A process subhistory H|P of a history H is the subsequence of all events in H whose process name is P

  H:   A op() x, B op() x, A ret() x, A op() y, B ret() x, A ret() y
  H|A: A op() x, A ret() x, A op() y, A ret() y

  • Process subhistories are always sequential

SLIDE 38

A simplified model of a concurrent execution (3)

  • An object subhistory H|x of a history H is the subsequence of all events in H whose object name is x

  H:   A op() x, B op() x, A ret() x, A op() y, B ret() x, A ret() y
  H|x: A op() x, B op() x, A ret() x, B ret() x

  • Object subhistories are not necessarily sequential
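Both projections are plain subsequence filters. A sketch, encoding each event as a (process, action, object) triple (the encoding and function names are mine):

```python
# The concurrent history H from the slides.
H = [("A", "op()", "x"), ("B", "op()", "x"), ("A", "ret()", "x"),
     ("A", "op()", "y"), ("B", "ret()", "x"), ("A", "ret()", "y")]

def process_subhistory(h, p):
    """H|P: all events of H issued by process p, in order."""
    return [e for e in h if e[0] == p]

def object_subhistory(h, x):
    """H|x: all events of H on object x, in order."""
    return [e for e in h if e[2] == x]

print(process_subhistory(H, "A"))  # sequential: invocations and responses alternate
print(object_subhistory(H, "x"))   # not sequential: A's and B's invocations overlap
```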

SLIDE 45

Equivalence between histories

  • Two histories H and H′ are equivalent if for every process P, H|P = H′|P

  H:  A op() x, B op() x, A ret() x, A op() y, B ret() x, A ret() y
  H′: B op() x, B ret() x, A op() x, A ret() x, A op() y, A ret() y

  H|A = H′|A: A op() x, A ret() x, A op() y, A ret() y
  H|B = H′|B: B op() x, B ret() x
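The definition translates into a direct check over per-process projections (event encoding mine, as a (process, action, object) triple):

```python
def equivalent(h1, h2):
    """True iff H|P == H'|P for every process P appearing in either history."""
    procs = {e[0] for e in h1} | {e[0] for e in h2}
    return all([e for e in h1 if e[0] == p] == [e for e in h2 if e[0] == p]
               for p in procs)

H  = [("A", "op()", "x"), ("B", "op()", "x"), ("A", "ret()", "x"),
      ("A", "op()", "y"), ("B", "ret()", "x"), ("A", "ret()", "y")]
H2 = [("B", "op()", "x"), ("B", "ret()", "x"), ("A", "op()", "x"),
      ("A", "ret()", "x"), ("A", "op()", "y"), ("A", "ret()", "y")]
print(equivalent(H, H2))  # True: A and B each observe the same subhistory
```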

SLIDE 46

Correctness Conditions

  • A concurrent execution is correct if it is equivalent to a correct sequential execution
    ⇒ A history is correct if it is equivalent to a sequential history which satisfies a set of correctness criteria
  • A correctness condition specifies the set of correctness criteria
    ⇒ In order to implement a concurrent object correctly w.r.t. a correctness condition, a programmer has to guarantee that every possible history of the implementation satisfies the correctness criteria

SLIDE 47

Sequential Consistency [Lamport 1970]

  • A history is sequentially consistent if it is equivalent to a sequential history which is correct according to the sequential definition of the objects
  • An object is sequentially consistent if every valid history associated with its usage is sequentially consistent

SLIDE 50

Sequential Consistency [Lamport 1970] (Example 1)

  • x is a FIFO queue with Enqueue (Enq) and Dequeue (Deq) operations
  • Is the history H sequentially consistent? Yes!

  H:  A Enq(1) x, A ret() x, B Enq(2) x, B ret() x, B Deq() x, B ret(2) x
  H′: B Enq(2) x, B ret() x, A Enq(1) x, A ret() x, B Deq() x, B ret(2) x
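A brute-force check makes the definition concrete: a history of completed operations is sequentially consistent iff some interleaving that preserves each thread's own order replays legally on a sequential FIFO queue. A sketch (all names and the per-process encoding are mine):

```python
from collections import deque

def fifo_legal(seq):
    """Replay a sequential history of (op, value) pairs on a FIFO queue."""
    q = deque()
    for name, val in seq:
        if name == "Enq":
            q.append(val)
        elif not q or q.popleft() != val:   # Deq must return the current head
            return False
    return True

def interleavings(seqs):
    """All merges of the given sequences that preserve each one's order."""
    seqs = [s for s in seqs if s]
    if not seqs:
        yield []
        return
    for i, s in enumerate(seqs):
        for tail in interleavings(seqs[:i] + [s[1:]] + seqs[i + 1:]):
            yield [s[0]] + tail

def sequentially_consistent(per_process):
    return any(fifo_legal(h) for h in interleavings(list(per_process.values())))

# The slide's H: A enqueues 1; B enqueues 2 and then dequeues 2.
H = {"A": [("Enq", 1)], "B": [("Enq", 2), ("Deq", 2)]}
print(sequentially_consistent(H))  # True: order B's Enq(2) before A's Enq(1)
```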

SLIDE 54

Sequential Consistency [Lamport 1970] (Example 2)

  • The composition of sequentially consistent histories is not necessarily sequentially consistent

  H: 1. A Enq(1) x  2. A ret() x  3. A Enq(1) y  4. A ret() y  5. B Enq(2) y  6. B ret() y  7. B Enq(2) x  8. B ret() x  9. A Deq() x  10. A ret(2) x  11. B Deq() y  12. B ret(1) y

  H|x: A Enq(1) x, A ret() x, B Enq(2) x, B ret() x, A Deq() x, A ret(2) x
  H|y: A Enq(1) y, A ret() y, B Enq(2) y, B ret() y, B Deq() y, B ret(1) y

  Each object subhistory is sequentially consistent on its own, yet no single sequential order of all of H's operations satisfies both queues at once.

SLIDE 55

Linearizability [Herlihy 1990]

  • A concurrent execution is linearizable if:
    • Each procedure appears to be executed at an indivisible point (the linearization point) between its invocation and its completion
    • The order among those points is correct according to the sequential definition of objects

SLIDE 56

Linearizability [Herlihy 1990] (Example 1)

[Timeline figure: thread A executes Enq(1) while thread B executes Enq(2) and then Deq(); moving the linearization points within the overlapping intervals yields executions where Deq() returns 2 and executions where it returns 1]


SLIDE 62

Linearizability [Herlihy 1990] (2)

  • A history H is linearizable if it is equivalent to a sequential history S such that:
    • S is correct according to the sequential definition of objects (H is sequentially consistent)
    • If a response precedes an invocation in the original history, then it must precede it in the sequential one as well
  • An object is linearizable if every valid history associated with its usage can be linearized

SLIDE 64

Linearizability [Herlihy 1990] (Example 2)

  • Is the history H linearizable? No!

  H: A Enq(1) x, A ret() x, B Enq(2) x, B ret() x, B Deq() x, B ret(2) x

SLIDE 66

Linearizability [Herlihy 1990] (Example 2)

  • Is the history H linearizable? Yes!

  H:  A Enq(1) x, B Enq(2) x, A ret() x, B ret() x, B Deq() x, B ret(2) x
  H′: B Enq(2) x, B ret() x, A Enq(1) x, A ret() x, B Deq() x, B ret(2) x
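Adding the real-time constraint to a brute-force search separates the two examples: the non-overlapping history is sequentially consistent but not linearizable, while the overlapping one is linearizable. A sketch, modelling each completed operation as (name, value, invocation time, response time); the encoding and timestamps are mine:

```python
from collections import deque
from itertools import permutations

def fifo_legal(seq):
    """Replay a sequential order of operations on a FIFO queue."""
    q = deque()
    for name, val, _, _ in seq:
        if name == "Enq":
            q.append(val)
        elif not q or q.popleft() != val:
            return False
    return True

def linearizable(ops):
    """True iff some FIFO-legal permutation keeps every real-time precedence:
    if op i responds before op j is invoked, i must come first."""
    n = len(ops)
    for perm in permutations(range(n)):
        pos = {i: k for k, i in enumerate(perm)}
        rt_ok = all(pos[i] < pos[j] for i in range(n) for j in range(n)
                    if i != j and ops[i][3] < ops[j][2])
        if rt_ok and fifo_legal([ops[i] for i in perm]):
            return True
    return False

# H of Example 2: Enq(1) completes before Enq(2) starts, yet Deq() returns 2.
H = [("Enq", 1, 0, 1), ("Enq", 2, 2, 3), ("Deq", 2, 4, 5)]
print(linearizable(H))   # False: real time forces Enq(1) first
# The overlapping variant: Enq(2) may take the earlier linearization point.
H2 = [("Enq", 1, 0, 2), ("Enq", 2, 1, 3), ("Deq", 2, 4, 5)]
print(linearizable(H2))  # True
```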

SLIDE 67

Linearizability Properties

  • Linearizability requires:
    • Correctness w.r.t. the objects' semantics (as Sequential Consistency does)
    • Real-time order
  • Linearizability ⇒ Sequential Consistency
  • The composition of linearizable histories is still linearizable

SLIDE 68

Quick look on transaction correctness conditions

  • We can see a transaction as a set of procedures on different objects that has to appear as atomic
  • Serializability requires that transactions appear to execute sequentially, i.e., without interleaving
    • A sort of sequential consistency for multi-object atomic procedures
  • Strict serializability requires that the transactions' order in the sequential history is compatible with their precedence order
    • A sort of linearizability for multi-object atomic procedures

SLIDE 69

Quick look on transaction correctness conditions (2)

[Diagram: the relationships among Serializability, Strict Serializability, Sequential Consistency, and Linearizability]

SLIDE 70

Correctness Conditions (Incomplete) Taxonomy

                                          Seq. Cons.  Linearizability  Serializability  Strict Ser.
  Equivalent to a sequential order        Y           Y                Y                Y
  Respects program order in each thread   Y           Y                Y                Y
  Consistent with real-time ordering      -           Y                -                Y
  Can touch multiple objects atomically   -           -                Y                Y
  Locality                                -           Y                -                -
SLIDE 71

Progress Conditions

  • Deadlock-free: some thread eventually acquires a lock
  • Starvation-free: every thread eventually acquires a lock
  • Lock-free: some method call completes
  • Wait-free: every method call completes
  • Obstruction-free: every method call completes, if it executes in isolation

SLIDE 73

Maximum and Minimum Progress

  • Minimum Progress:
  • Some method call completes eventually
  • Maximum Progress:
  • Every method call completes eventually
  • Progress is a per-method property:
  • A real data structure can combine blocking and wait-free methods
  • For example, the Java Concurrency Package:
  • Skiplists
  • Hash Tables
  • Exchangers


SLIDE 74

Progress Taxonomy

                  Non-Blocking                   Blocking
  For everyone    Wait-free, Obstruction-free    Starvation-free
  For some        Lock-free                      Deadlock-free

SLIDE 75

Scheduler’s Role

Progress conditions on multiprocessors:

  • Are not about guarantees provided by a method implementation
  • Are about the scheduling support needed to provide maximum or minimum progress

SLIDE 77

Scheduler Requirements

                  Non-Blocking                              Blocking
  For everyone    Wait-free: nothing                        Starvation-free: no thread locked in CS
                  Obstruction-free: thread executes alone
  For some        Lock-free: nothing                        Deadlock-free: no thread locked in CS

SLIDE 79

Dependent Progress

  • A progress condition is said to be dependent if maximum (or minimum) progress requires scheduler support
  • Otherwise it is called independent
  • Progress conditions are therefore not about guarantees provided by the implementations
  • Programmers develop lock-free, obstruction-free or deadlock-free algorithms implicitly assuming that modern schedulers are benevolent and that, therefore, every method call will eventually complete, as if they were wait-free

SLIDE 82

Progress Taxonomy

                  Non-Blocking                   Blocking
  For everyone    Wait-free, Obstruction-free    Starvation-free
  For some        Lock-free, Clash-free          Deadlock-free

  • Clash freedom is the Einsteinium of progress conditions: it does not exist in nature and has no value
  • It is known that clash freedom is a strictly weaker property than obstruction freedom

SLIDE 83

Concurrent Data Structures

  • Developing data structures which can be accessed concurrently by multiple threads can significantly increase programs' performance
  • Synchronization primitives must be avoided
  • Results' correctness must be guaranteed (recall linearizability)
  • We can rely on the atomic operations provided by computer architectures