CS 251 Spring 2020: Principles of Programming Languages (Ben Wood)
Concurrency (and Parallelism)


SLIDE 1


CS 251 Spring 2020

Principles of Programming Languages

Ben Wood

λ

https://cs.wellesley.edu/~cs251/s20/

Concurrency

(and Parallelism)

SLIDE 2

Parallelism and Concurrency in 251

  • Goal: encounter
    – essence, key concerns
    – non-sequential thinking
    – some high-level models
    – some mid-to-high-level mechanisms

  • Non-goals:
    – performance engineering / measurement
    – deep programming proficiency
    – exhaustive survey of models and mechanisms

SLIDE 3

Parallelism vs. Concurrency

Parallelism: data/work divided among workers (workers = resources). Use more resources to complete work faster.

Concurrency: workers (computations) share resources. Coordinate access to shared resources.

Both can be expressed using a variety of primitives.

SLIDE 4

Concurrency via Concurrent ML

  • Extends SML with language features for concurrency.
  • Included in SML/NJ and Manticore.
  • Model:
    – explicitly threaded
    – synchronous message-passing over channels
    – first-class events

SLIDE 5

CML: spawn explicit threads

  • vs. Manticore's "hints" for implicit parallelism.

val spawn : (unit -> unit) -> thread_id

let fun f () = … (* new thread's work *)
    val t2 = spawn f
in … (* this thread's work *)
end

Concurrency 5

[Timeline: Thread 1 calls spawn f with a workload thunk and continues; a new Thread 2 runs f.]
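Since the slide's SML sketch is not directly runnable here, a rough analogue in Go may help: goroutines play the role of CML threads. The `spawn` helper below is hypothetical (not a CML or Go API), and returning a done-channel instead of a thread_id is a Go-ism, not part of CML.

```go
package main

import "fmt"

// spawn starts f in a new goroutine and returns a channel that is
// closed when f finishes, so the caller can wait for it if it wants.
func spawn(f func()) <-chan struct{} {
	done := make(chan struct{})
	go func() {
		f()
		close(done)
	}()
	return done
}

func main() {
	t2 := spawn(func() { fmt.Println("new thread's work") })
	fmt.Println("this thread's work")
	<-t2 // wait, so the program does not exit before the new thread runs
}
```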

SLIDE 6

(Aside: different model, fork-join)


[Diagram: a tree of tasks forking and joining.]

fork : (unit -> 'a) -> 'a task    "call" a function in a new thread
join : 'a task -> 'a              wait for it to "return" a result

Mainly for explicit task parallelism (expressing dependences between tasks), not concurrency (interaction/coordination/cooperation between tasks).

(CML's threads are similar, but cooperation is different.)
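The fork/join signatures above can be sketched in Go (assumed names, Go 1.18+ generics); this is an illustration of the interface, not an implementation from the course.

```go
package main

import "fmt"

// task is a handle to a computation running in another thread.
type task[A any] struct{ result chan A }

// fork "calls" f in a new goroutine and returns a handle to its result.
func fork[A any](f func() A) task[A] {
	t := task[A]{result: make(chan A, 1)}
	go func() { t.result <- f() }()
	return t
}

// join waits for the task to "return" its result.
func join[A any](t task[A]) A { return <-t.result }

func main() {
	// Express a dependence: combine two independent subtasks.
	a := fork(func() int { return 20 })
	b := fork(func() int { return 22 })
	fmt.Println(join(a) + join(b)) // prints 42
}
```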

SLIDE 7

CML: How do threads cooperate?

val spawn : (unit -> unit) -> thread_id


How do we pass values in? How do we get results of work out?

let val data_in_env = …
    fun closures_for_the_win x = …
    val _ = spawn (fn () =>
              map closures_for_the_win data_in_env)
in … end

SLIDE 8

CML: How do threads cooperate?

val spawn : (unit -> unit) -> thread_id

Threads communicate by passing messages through channels.

type 'a chan
val recv : 'a chan -> 'a
val send : ('a chan * 'a) -> unit

SLIDE 9

Tiny channel example

val channel : unit -> 'a chan

let val ch : int chan = channel ()
    fun inc () =
      let val n = recv ch
          val () = send (ch, n + 1)
      in exit () end
in
  spawn inc;
  send (ch, 3);
  …;
  recv ch
end


[Timeline: spawn inc; the main thread's send(ch,3) rendezvous with inc's recv ch, passing 3; inc's send(ch,4) rendezvous with the main thread's recv ch, passing 4.]
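The tiny channel example translates almost directly to Go: an unbuffered Go channel gives the same synchronous rendezvous as a CML channel. The `incOnce` wrapper is an assumed name added for illustration.

```go
package main

import "fmt"

// incOnce spawns a helper thread that receives a number, sends back
// its successor, and exits; then performs the main thread's side.
func incOnce(x int) int {
	ch := make(chan int)
	go func() {         // spawn inc
		n := <-ch   // recv ch
		ch <- n + 1 // send (ch, n + 1)
	}()
	ch <- x     // send (ch, x): blocks until inc receives
	return <-ch // recv ch: blocks until inc sends back
}

func main() {
	fmt.Println(incOnce(3)) // prints 4
}
```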

SLIDE 10

Concurrent streams

fun makeNatStream () =
  let val ch = channel ()
      fun count i = (send (ch, i); count (i + 1))
  in spawn (fn () => count 0); ch end

fun sum stream 0 acc = acc
  | sum stream n acc = sum stream (n - 1) (acc + recv stream)

val nats = makeNatStream ()
val sumFirst2 = sum nats 2 0
val sumNext2 = sum nats 2 0


[Timeline: spawn (fn () => count 0); each recv stream rendezvous with the next send(ch,i), delivering 0, 1, 2, 3, ….]
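The same stream, sketched in Go: a goroutine sends 0, 1, 2, … over an unbuffered channel, so each send blocks until a consumer receives (the producer goroutine simply blocks forever once the demo stops receiving, which is fine for a short program).

```go
package main

import "fmt"

// makeNatStream spawns a producer of the natural numbers.
func makeNatStream() <-chan int {
	ch := make(chan int)
	go func() {
		for i := 0; ; i++ {
			ch <- i // send blocks until someone receives
		}
	}()
	return ch
}

// sum receives n values from the stream and adds them up.
func sum(stream <-chan int, n int) int {
	acc := 0
	for ; n > 0; n-- {
		acc += <-stream
	}
	return acc
}

func main() {
	nats := makeNatStream()
	fmt.Println(sum(nats, 2)) // 0 + 1 = 1
	fmt.Println(sum(nats, 2)) // 2 + 3 = 5
}
```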

SLIDE 11

A common pattern: looping thread

fun forever init f =
  let fun loop s = loop (f s)
  in spawn (fn () => loop init); () end

SLIDE 12

Concurrent streams

fun makeNatStream () =
  let val ch = channel ()
  in
    forever 0 (fn i => (send (ch, i); i + 1));
    ch
  end


see cml-sieve.sml, cml-stream.sml
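The looping-thread pattern also carries over to Go; here is a sketch of `forever` (assumed name, Go generics) and the stream rebuilt on top of it, as on the slide.

```go
package main

import "fmt"

// forever spawns a thread that loops indefinitely, threading a state
// value through f, like the slide's looping-thread pattern.
func forever[S any](init S, f func(S) S) {
	go func() {
		for s := init; ; s = f(s) {
		}
	}()
}

// makeNatStream rebuilt on top of forever.
func makeNatStream() <-chan int {
	ch := make(chan int)
	forever(0, func(i int) int {
		ch <- i // send blocks until a consumer receives
		return i + 1
	})
	return ch
}

func main() {
	nats := makeNatStream()
	fmt.Println(<-nats, <-nats, <-nats) // prints 0 1 2
}
```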

SLIDE 13

Event ordering? (1)

fun makeNatStream () =
  let val ch = channel ()
      fun count i = (send (ch, i); count (i + 1))
  in spawn (fn () => count 0); ch end

val nats = makeNatStream ()
val _ = spawn (fn () =>
          print ("Green " ^ (Int.toString (recv nats))))
val _ = print ("Blue " ^ (Int.toString (recv nats)))


[Timeline: one possible ordering; the spawned thread's "Green" print and the main thread's "Blue" print may receive 0 and 1 in either order.]

SLIDE 14

Event ordering? (2)

fun makeNatStream () =
  let val ch = channel ()
      fun count i = (send (ch, i); count (i + 1))
  in spawn (fn () => count 0); ch end

val nats = makeNatStream ()
val _ = spawn (fn () =>
          print ("Green " ^ (Int.toString (recv nats))))
val _ = print ("Blue " ^ (Int.toString (recv nats)))


[Timeline: a second possible ordering of the same program; which thread receives 0 and which receives 1 is not determined.]

SLIDE 15

Synchronous message passing (CML)

📟 Synchronous message passing = handshake:
receive blocks until a message is sent; send blocks until the message is received.

vs. 📭 asynchronous message passing:
receive blocks until a message has arrived; send can finish immediately without blocking.

SLIDE 16

Synchronous message passing (CML)


[Timeline: Thread 1's send (ch, 0) blocks until Thread 2 executes recv ch; symmetrically, recv ch blocks until another thread sends on ch. Each matching send/recv pair is a rendezvous 📟.]

SLIDE 17

Asynchronous message passing (not CML)


[Timeline: Thread 1's send (ch, 0) operations complete immediately without blocking, and the messages queue in the channel 📭; Thread 2's recv ch blocks only until a message has arrived.]
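Go lets us see both models side by side: an unbuffered channel gives the CML-style synchronous handshake, while a buffered channel approximates asynchronous message passing (sends complete immediately until the buffer fills; true unbounded asynchrony would need more machinery). `syncDemo` and `asyncDemo` are illustrative names, not CML operations.

```go
package main

import "fmt"

// syncDemo: with an unbuffered channel, the sender blocks until
// the receiver is ready (a rendezvous).
func syncDemo() int {
	ch := make(chan int)
	go func() { ch <- 7 }() // blocks until the recv below
	return <-ch
}

// asyncDemo: with a buffered channel, sends complete immediately
// even though no receiver exists yet.
func asyncDemo() (int, int) {
	ch := make(chan int, 8)
	ch <- 0 // does not block
	ch <- 1
	return <-ch, <-ch
}

func main() {
	fmt.Println(syncDemo()) // prints 7
	a, b := asyncDemo()
	fmt.Println(a, b) // prints 0 1
}
```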

SLIDE 18

First-class events, combinators

Event constructors:

val sendEvt : ('a chan * 'a) -> unit event
val recvEvt : 'a chan -> 'a event

Event combinators:

val sync : 'a event -> 'a
val choose : 'a event list -> 'a event
val wrap : ('a event * ('a -> 'b)) -> 'b event

val select = sync o choose

SLIDE 19

Utilities

val recv = sync o recvEvt
val send = sync o sendEvt

fun forever init f =
  let fun loop s = loop (f s)
  in spawn (fn () => loop init); () end

SLIDE 20

Why combinators?

fun makeZipCh (inChA, inChB, outCh) =
  forever () (fn () =>
    let val (a, b) =
          select [
            wrap (recvEvt inChA, fn a => (a, recv inChB)),
            wrap (recvEvt inChB, fn b => (recv inChA, b))
          ]
    in send (outCh, (a, b)) end)


Remember: synchronous (blocking) message-passing.
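Go's select statement plays the role of CML's choose/select here: commit to whichever input is ready first, then receive the other. A sketch of makeZipCh in Go (int channels assumed for simplicity):

```go
package main

import "fmt"

// makeZipCh pairs up values from two input channels and sends the
// pairs on out. select commits to whichever input rendezvous is
// ready first; the thread then blocks for the other input.
func makeZipCh(inA, inB <-chan int, out chan<- [2]int) {
	go func() {
		for {
			var a, b int
			select {
			case a = <-inA:
				b = <-inB
			case b = <-inB:
				a = <-inA
			}
			out <- [2]int{a, b}
		}
	}()
}

func main() {
	inA := make(chan int)
	inB := make(chan int)
	out := make(chan [2]int)
	makeZipCh(inA, inB, out)
	go func() { inA <- 1 }()
	go func() { inB <- 2 }()
	fmt.Println(<-out) // prints [1 2] regardless of which input was ready first
}
```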

SLIDE 21

More CML

  • Emulating mutable state via concurrency: cml-cell.sml
  • Dataflow / pipeline computation: cml-sieve.sml
  • Implement futures: cml-futures.sml

SLIDE 22

Why avoid mutation (of shared data)?

  • For parallelism?
  • For concurrency?

Other models:

Shared-memory multithreading + synchronization …

SLIDE 23

Shared-Memory Multithreading

[Diagram: several threads, each with its own program counter (pc).]

Unshared: locals and control. Shared: heap and globals.
Implicit communication through sharing.

SLIDE 24

Concurrency and Race Conditions

int bal = 0;

Thread 1: t1 = bal; bal = t1 + 10
Thread 2: t2 = bal; bal = t2 - 10

[Interleaving: one thread's read and write complete before the other's begin.]

bal == 0

SLIDE 25

Concurrency and Race Conditions

int bal = 0;

Thread 1: t1 = bal; bal = t1 + 10
Thread 2: t2 = bal; bal = t2 - 10

[Interleaving: both threads read bal == 0 before either writes; Thread 1 writes 10, then Thread 2's write of -10 overwrites it.]

bal == -10

SLIDE 26

Concurrency and Race Conditions

Lock m = new Lock();
int bal = 0;

Thread 1: synchronized(m) { t1 = bal; bal = t1 + 10 }
Thread 2: synchronized(m) { t2 = bal; bal = t2 - 10 }

[Interleaving: one thread acquire(m)s, reads and writes bal, then release(m)s before the other can acquire(m); the two critical sections can no longer overlap, so bal == 0 in either order.]
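The lock-based fix can be sketched in Go, with sync.Mutex standing in for the slide's synchronized(m); `finalBalance` is an illustrative wrapper, not code from the course.

```go
package main

import (
	"fmt"
	"sync"
)

// finalBalance runs the slide's two threads with the lock held
// around each read-modify-write critical section.
func finalBalance() int {
	var m sync.Mutex
	bal := 0
	var wg sync.WaitGroup
	update := func(delta int) {
		defer wg.Done()
		m.Lock()        // acquire(m)
		t := bal        // t = bal
		bal = t + delta // bal = t + delta
		m.Unlock()      // release(m)
	}
	wg.Add(2)
	go update(+10) // Thread 1
	go update(-10) // Thread 2
	wg.Wait()
	return bal
}

func main() {
	fmt.Println(finalBalance()) // always 0: the critical sections cannot interleave
}
```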