Seminars in Advanced Topics in Computer Science Engineering 2019/2020
Romolo Marotta
Concurrent and parallel programming
Blocking synchronization is the most intuitive approach: threads access the SHARED RESOURCE in mutual exclusion, while the others wait.

[Figure: threads waiting (…zZz…) on a lock protecting the shared resource]

Liveness might be impaired due to the arbitration of accesses.

Correctness is guaranteed by mutual exclusion, but performance might be hampered because waiting threads cannot make progress.
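As a concrete illustration (not from the slides; Python is used only for brevity), a lock enforces mutual exclusion on a shared counter: no update is lost, but at any time every thread except the lock holder waits.

```python
# Sketch of blocking synchronization: a lock protects the shared resource.
import threading

counter = 0                      # the shared resource
lock = threading.Lock()

def worker(iterations):
    global counter
    for _ in range(iterations):
        with lock:               # mutual exclusion: other threads wait
            counter += 1         # critical section

threads = [threading.Thread(target=worker, args=(10_000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)  # 40000: no increment is lost
```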
Objects are specified by their sequential implementations: if a program completes with a given input, we want that its parallel alternative also completes with the same input.
In a concurrent execution, operations run in an interleaved fashion, resembling a sequential execution. A concurrent execution is correct if we can associate it with a sequential one, which we know the functioning of.
A concurrent system consists of threads/processes that communicate through shared data structures called objects.

A history is the sequence of invocation and response events generated on an object by a set of threads.

Invocation event: A op(args*) x, where A is the thread id, op the method name, args* the list of parameters, and x the object.
Response event: A ret(res*) x, where ret is the reply token and res* the list of returned values.

A history is sequential if every invocation has an immediate response.
Sequential H’: A op() x · A ret() x · B op() x · B ret() x · A op() y · A ret() y
Concurrent H: A op() x · B op() x · A ret() x · A op() y · B ret() x · A ret() y
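The event notation can be encoded concretely; a sketch with my own tuple layout, including a predicate for the "immediate response" property that characterizes sequential histories:

```python
# Each event: (thread, kind, method, obj); kind is "inv" or "ret".
# A history is sequential if every invocation is immediately followed
# by a response of the same thread on the same object.

def is_sequential(history):
    if len(history) % 2 != 0:
        return False
    for i in range(0, len(history), 2):
        (t1, k1, _, o1), (t2, k2, _, o2) = history[i], history[i + 1]
        if not (k1 == "inv" and k2 == "ret" and t1 == t2 and o1 == o2):
            return False
    return True

# H' (sequential) and H (concurrent), as in the slide example:
H_seq = [("A", "inv", "op", "x"), ("A", "ret", "op", "x"),
         ("B", "inv", "op", "x"), ("B", "ret", "op", "x"),
         ("A", "inv", "op", "y"), ("A", "ret", "op", "y")]
H_conc = [("A", "inv", "op", "x"), ("B", "inv", "op", "x"),
          ("A", "ret", "op", "x"), ("A", "inv", "op", "y"),
          ("B", "ret", "op", "x"), ("A", "ret", "op", "y")]

print(is_sequential(H_seq), is_sequential(H_conc))  # True False
```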
A history is correct if it is equivalent to a correct sequential history.
H|P denotes the projection of history H on thread P, i.e., the subsequence of all events of P in H. Two histories H and H’ are equivalent if H|P = H’|P for every thread P.

H: A op() x · B op() x · A ret() x · A op() y · B ret() x · A ret() y
H’: B op() x · B ret() x · A op() x · A ret() x · A op() y · A ret() y

H|A = H’|A = A op() x · A ret() x · A op() y · A ret() y
H|B = H’|B = B op() x · B ret() x

Since H|P = H’|P for every thread P, H is equivalent to the sequential history H’: if H’ is correct, H is correct.
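The per-thread projection and the equivalence check can be sketched as follows (encoding and function names are mine):

```python
# Projection H|P and history equivalence.
def proj(history, thread):
    """H|P: the subsequence of H containing only P's events."""
    return [e for e in history if e[0] == thread]

def equivalent(h1, h2):
    """H and H' are equivalent iff H|P = H'|P for every thread P."""
    threads = {e[0] for e in h1} | {e[0] for e in h2}
    return all(proj(h1, t) == proj(h2, t) for t in threads)

# The concurrent history H and the sequential history H' from the slides:
H  = [("A", "inv", "x"), ("B", "inv", "x"), ("A", "ret", "x"),
      ("A", "inv", "y"), ("B", "ret", "x"), ("A", "ret", "y")]
Hp = [("B", "inv", "x"), ("B", "ret", "x"), ("A", "inv", "x"),
      ("A", "ret", "x"), ("A", "inv", "y"), ("A", "ret", "y")]

print(equivalent(H, Hp))  # True
```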
A history is correct if it is equivalent to a correct sequential history which satisfies a given correctness condition, considered as reference. In order to implement a concurrent object correctly wrt a correctness condition, we must guarantee that every possible history on our implementation satisfies the correctness condition.

A history is sequentially consistent if it is equivalent to a legal sequential history that preserves the program order of each thread. An object implementation is sequentially consistent if every history associated with its usage is sequentially consistent.
[Timeline: thread A executes Enq(1); thread B executes Enq(2) and then Deq(2)]

H: A Enq(1) x · A ret() x · B Enq(2) x · B ret() x · B Deq(2) x · B ret() x

H|A: A Enq(1) x · A ret() x
H|B: B Enq(2) x · B ret() x · B Deq(2) x · B ret() x

H’: B Enq(2) x · B ret() x · A Enq(1) x · A ret() x · B Deq(2) x · B ret() x

H’ is a legal sequential history of the queue (2 is enqueued first, so Deq returns 2) that preserves the program order of each thread, hence H is sequentially consistent.
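This check can be automated by brute force: a history is sequentially consistent if some interleaving of the threads' operation sequences, preserving each thread's program order, is a legal FIFO-queue execution. A sketch with my own encoding (exponential, for illustration only):

```python
from collections import deque

def legal_fifo(ops):
    """Replay (op, value) pairs against a sequential FIFO queue."""
    q = deque()
    for op, v in ops:
        if op == "enq":
            q.append(v)
        else:  # "deq"
            if not q or q.popleft() != v:
                return False
    return True

def seq_consistent(per_thread):
    """Does some merge keeping each thread's program order replay legally?"""
    def merge(prefix, seqs):
        if all(len(s) == 0 for s in seqs):
            return legal_fifo(prefix)
        return any(merge(prefix + [s[0]], seqs[:i] + [s[1:]] + seqs[i + 1:])
                   for i, s in enumerate(seqs) if s)
    return merge([], list(per_thread))

A = [("enq", 1)]                  # thread A: Enq(1)
B = [("enq", 2), ("deq", 2)]      # thread B: Enq(2), then Deq() -> 2

print(legal_fifo(A + B))          # False: in H's own order, Deq should return 1
print(seq_consistent([A, B]))     # True: reorder Enq(2) before Enq(1)
```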
Linearizability: each operation appears to take effect instantaneously at some point (linearization point) between its invocation and completion, consistently with the sequential definition of objects.
[Timeline frames: placing a linearization point for each of A's Enq(1) and B's Enq(2) and Deq(2) between the operation's invocation and completion]
If an operation precedes another one in the concurrent history, then it must precede it in the sequential one as well. An object implementation is linearizable if every history associated with its usage can be linearized.
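Unlike sequential consistency, linearizability also constrains the real-time order. A brute-force sketch (my own encoding: each operation carries invocation and response timestamps; exponential, for illustration only):

```python
from collections import deque
from itertools import permutations

# Each operation: (inv_time, ret_time, op, value).
def respects_realtime(order):
    """No operation is placed before one that completed before it started."""
    return all(order[j][1] >= order[i][0]
               for i in range(len(order)) for j in range(i + 1, len(order)))

def legal_fifo(order):
    q = deque()
    for _, _, op, v in order:
        if op == "enq":
            q.append(v)
        elif not q or q.popleft() != v:
            return False
    return True

def linearizable(ops):
    """Brute force: some legal sequential order consistent with real time."""
    return any(respects_realtime(p) and legal_fifo(p) for p in permutations(ops))

# Enq(1) completes before Enq(2) starts, yet Deq returns 2:
# not linearizable (although it is sequentially consistent).
ops = [(0, 1, "enq", 1), (2, 3, "enq", 2), (4, 5, "deq", 2)]
print(linearizable(ops))   # False

# If the two enqueues overlap, a linearization point ordering Enq(2)
# first exists, so the same outcome becomes linearizable.
ops2 = [(0, 3, "enq", 1), (1, 2, "enq", 2), (4, 5, "deq", 2)]
print(linearizable(ops2))  # True
```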
Linearizability is a local property (closed under composition): a system composed of linearizable objects is linearizable.

A transaction is a sequence of operations, possibly on a different object each, that has to appear as atomic: transactions must seem to execute sequentially, i.e., without interleaving. A history of transactions (procedures) is serializable if it is equivalent to a sequential execution of the transactions; it is strictly serializable if, in addition, the equivalent sequential history is compatible with their precedence order.
[Diagram: relations among Serializability, Strict Serializability, Sequential Consistency, Linearizability and Opacity]

Serializability and strict serializability predicate only on committed transactions; opacity restricts also aborted transactions (required for Transactional Memory).

Condition              | Equivalent to a sequential order | Respects program order in each thread | Consistent with real-time ordering | Access multiple objects atomically | Locality
Sequential Consistency | yes | yes | no  | no  | no
Linearizability        | yes | yes | yes | no  | yes
Serializability        | yes | yes | no  | yes | no
Strict Serializability | yes | yes | yes | yes | no
[Figure: a thread sleeping (…zZz…) while holding the lock on the SHARED RESOURCE]

The scheduler should guarantee that the thread holding the lock completes its critical section.

Progress conditions on multiprocessors depend on the scheduler as well as on the object implementation. The requirement for lock-based applications to make progress is fair scheduling: every thread takes an infinite number of concrete steps.
Progress conditions:

              | Non-blocking                        | Blocking
              | Independent  | Dependent (isolation)| Dependent (fairness)
For everyone  | Wait freedom | Obstruction freedom  | Starvation freedom
For someone   | Lock freedom | Clash freedom        | Deadlock freedom

Independent conditions hold regardless of how the scheduler behaves; dependent ones require guarantees from the underlying platform (isolation or fairness).
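Python has no user-level compare-and-swap instruction, so the sketch below simulates CAS with a lock purely to illustrate the lock-free retry pattern: some thread always makes progress, but a particular thread may in principle retry forever (lock freedom, not wait freedom).

```python
import threading

class SimCAS:
    """Simulated compare-and-swap. A real implementation would use a
    hardware atomic instruction; the internal lock is only a stand-in."""
    def __init__(self, value):
        self.value = value
        self._guard = threading.Lock()

    def compare_and_swap(self, expected, new):
        with self._guard:
            if self.value == expected:
                self.value = new
                return True
            return False

counter = SimCAS(0)

def increment():
    while True:                  # lock-free retry loop: a CAS failure means
        old = counter.value      # some OTHER thread succeeded, so the system
        if counter.compare_and_swap(old, old + 1):  # as a whole progressed
            return

threads = [threading.Thread(target=lambda: [increment() for _ in range(1000)])
           for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter.value)  # 4000
```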
Such a strong guarantee can be expensive to obtain and (maybe) has no “commercial” value.
[Figure: doubling and quadrupling the computing power gives ideal speedups of 2x and 4x, but actual speedups of 1.5x and 1.8x]
Amdahl's law measures how the speedup of the same program varies when adding more computing power:

S_Amdahl = T_s / T_p = T_s / (β T_s + (1 − β) T_s / p) = 1 / (β + (1 − β) / p)

T_s: serial execution time
T_p: parallel execution time
β: serial fraction of the program (P = 1 − β is the parallelizable fraction)
p: number of processors
lim_{p→∞} S_Amdahl = lim_{p→∞} 1 / (β + (1 − β) / p) = 1 / β

With β = 0.2: lim_{p→∞} S_Amdahl = 1 / 0.2 = 5
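The law is easy to explore numerically; a small sketch (function name is mine):

```python
def amdahl_speedup(beta, p):
    """Amdahl's law: speedup of a fixed-size program with serial
    fraction beta on p processors."""
    return 1.0 / (beta + (1.0 - beta) / p)

print(amdahl_speedup(0.2, 4))      # 2.5
print(amdahl_speedup(0.2, 10**9))  # ~5.0: bounded by 1/beta, however large p
```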
Gustafson's law measures how the speedup of the scaled program varies when adding more computing power. The workload is scaled with the number of processors, W’ = β W + (1 − β) p W, hence

S_Gustafson = W’ / W = β + (1 − β) p
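A numerical sketch of the scaled speedup (function name is mine):

```python
def gustafson_speedup(beta, p):
    """Gustafson's law: the parallel part of the workload grows with p."""
    return beta + (1.0 - beta) * p

print(gustafson_speedup(0.2, 4))    # ~3.4
print(gustafson_speedup(0.2, 100))  # grows linearly with p: no 1/beta bound
```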
Sun-Ni's law scales the workload through a function G(p), which models how the workload grows as the available computing power (and memory) increases:

S_Sun-Ni = (sequential time for W*) / (parallel time for W*)
         = (β W + (1 − β) G(p) W) / (β W + (1 − β) G(p) W / p)
         = (β + (1 − β) G(p)) / (β + (1 − β) G(p) / p)
S_Sun-Ni = (β + (1 − β) G(p)) / (β + (1 − β) G(p) / p)
S_Amdahl = 1 / (β + (1 − β) / p)   (Sun-Ni with G(p) = 1)
S_Gustafson = β + (1 − β) p   (Sun-Ni with G(p) = p)
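The two reductions can be checked numerically; a sketch (names are mine):

```python
def sun_ni_speedup(beta, p, G):
    """Sun-Ni's law with workload scaling function G(p)."""
    g = G(p)
    return (beta + (1.0 - beta) * g) / (beta + (1.0 - beta) * g / p)

print(sun_ni_speedup(0.2, 4, lambda p: 1))  # ~2.5: Amdahl (fixed workload)
print(sun_ni_speedup(0.2, 4, lambda p: p))  # ~3.4: Gustafson (linear scaling)
```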
Speedup can even be superlinear: with more processors, each partition of the working set can fit into caches and the memory access time reduces dramatically; partitioning a data set also helps by drastically reducing the time required, e.g., to search it.
Efficiency measures how well the additional computing power is exploited:

E = speedup / #processors

Strong scaling: increase the number of processes and maintain fixed the problem size.
Weak scaling: increase at the same rate the problem size and the number of processes.
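A sketch combining efficiency with Amdahl's law, showing how strong-scaling efficiency degrades as processors are added (function names are mine):

```python
def amdahl_speedup(beta, p):
    """Speedup of a fixed-size program, serial fraction beta, p processors."""
    return 1.0 / (beta + (1.0 - beta) / p)

def efficiency(speedup, processors):
    """E = speedup / #processors: fraction of computing power exploited."""
    return speedup / processors

# Strong scaling under Amdahl's law (beta = 0.2): the problem size is
# fixed, so efficiency drops as processors are added.
for p in (1, 4, 16):
    print(p, efficiency(amdahl_speedup(0.2, p), p))  # 1.0, ~0.625, ~0.25
```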