
CS 251 Fall 2019. Principles of Programming Languages. Ben Wood. Concurrency (and Parallelism). https://cs.wellesley.edu/~cs251/f19/


  1. CS 251 Fall 2019. Principles of Programming Languages. Ben Wood. Concurrency (and Parallelism). https://cs.wellesley.edu/~cs251/f19/

  2. Parallelism and Concurrency in 251
     • Goal: encounter
       – essence, key concerns
       – non-sequential thinking
       – some high-level models
       – some mid-to-high-level mechanisms
     • Non-goals:
       – performance engineering / measurement
       – deep programming proficiency
       – exhaustive survey of models and mechanisms

  3. Concurrency vs. Parallelism
     Concurrency: coordinate access to shared resources.
       workers = computations; data = resources, shared among workers.
     Parallelism: use more resources to complete work faster.
       workers = resources; data / work divided among workers.
     Both can be expressed using a variety of primitives.

  4. Concurrency via Concurrent ML
     • Extends SML with language features for concurrency.
     • Included in SML/NJ and Manticore.
     • Model:
       – explicitly threaded
       – message-passing over channels
       – first-class events

  5. Explicit threads: spawn (vs. Manticore's "hints" for implicit parallelism)

     val spawn : (unit -> unit) -> thread_id    (* argument: workload thunk *)

     let
       fun f () = (* new thread's work… *)
       val t2 = spawn f
     in
       (* this thread's work… *)
     end

     Time diagram: after spawn f, thread 1 continues while the new thread runs f.

  6. Another thread/task model: fork-join

     fork : (unit -> 'a) -> 'a task    "call" a function in a new thread
     join : 'a task -> 'a              wait for it to "return" a result

     Mainly for explicit task parallelism, not concurrency.
     (CML's threads are similar, but cooperation is different.)
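The fork/join pair can be sketched in Go (an analogue, not part of the slides): a goroutine plays the forked task, a one-slot channel carries its result, and the generic 'a is specialized to int for simplicity. The names fork and join are carried over from the slide; the Go code itself is illustrative.

```go
package main

import "fmt"

// fork runs f in a new goroutine ("call a function in a new thread")
// and returns a task: a one-slot channel that will carry f's result.
func fork(f func() int) <-chan int {
	task := make(chan int, 1)
	go func() { task <- f() }()
	return task
}

// join waits for the task to "return" a result.
func join(task <-chan int) int { return <-task }

func main() {
	t := fork(func() int { return 21 * 2 })
	// ...this thread can do other work here...
	fmt.Println(join(t)) // prints 42
}
```

The one-slot buffer lets the forked task finish even if nobody ever joins it, which is one of several reasonable design choices here.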

  7. CML: How do threads cooperate?

     val spawn : (unit -> unit) -> thread_id    (* argument: workload thunk *)

     ✓ How do we pass values in? Closures capture the environment:

       let
         val data_in_env = …
         fun closures_for_the_win x = …
         val _ = spawn (fn () => map closures_for_the_win data_in_env)
       in … end

     How do we get results of work out?

  8. CML: How do threads cooperate?

     val spawn : (unit -> unit) -> thread_id

     How do we get results of work out?
     Threads communicate by passing messages through channels.

       type ’a chan
       val recv : ’a chan -> ’a
       val send : (’a chan * ’a) -> unit

  9. Tiny channel example (draw a time diagram)

     val channel : unit -> ’a chan

     let
       val ch : int chan = channel ()
       fun inc () =
         let
           val n = recv ch
           val () = send (ch, n + 1)
         in exit () end
     in
       spawn inc;
       send (ch, 3);
       …;
       recv ch
     end

  10. Concurrent streams (draw a time diagram)

     fun makeNatStream () =
       let
         val ch = channel ()
         fun count i = (send (ch, i); count (i + 1))
       in
         spawn (fn () => count 0);
         ch
       end

     fun sum stream 0 acc = acc
       | sum stream n acc = sum stream (n - 1) (acc + recv stream)

     val nats = makeNatStream ()
     val sumFirst2 = sum nats 2 0
     val sumNext2 = sum nats 2 0
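The stream pattern has a close analogue in Go, whose unbuffered channels are synchronous like CML's. This sketch is not from the slides; the names makeNatStream and sum are carried over, and the Go signatures are my own rendering.

```go
package main

import "fmt"

// makeNatStream spawns a goroutine that sends 0, 1, 2, ... on an
// unbuffered channel; each send blocks until a receiver is ready,
// just like CML's synchronous send.
func makeNatStream() <-chan int {
	ch := make(chan int)
	go func() {
		for i := 0; ; i++ {
			ch <- i
		}
	}()
	return ch
}

// sum receives n values from the stream, accumulating into acc.
func sum(stream <-chan int, n, acc int) int {
	for ; n > 0; n-- {
		acc += <-stream
	}
	return acc
}

func main() {
	nats := makeNatStream()
	sumFirst2 := sum(nats, 2, 0) // 0 + 1
	sumNext2 := sum(nats, 2, 0)  // 2 + 3
	fmt.Println(sumFirst2, sumNext2) // prints 1 5
}
```

As on the slide, the second call to sum picks up where the first left off: the producer only advances when a consumer receives.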

  11. A common pattern: looping thread

     fun forever init f =
       let
         fun loop s = loop (f s)
       in
         spawn (fn () => loop init);
         ()
       end

  12. Concurrent streams

     fun makeNatStream () =
       let
         val ch = channel ()
       in
         forever 0 (fn i => (send (ch, i); i + 1));
         ch
       end

     See cml-sieve.sml, cml-stream.sml.

  13. Ordering? (draw a time diagram)

     fun makeNatStream () =
       let
         val ch = channel ()
         fun count i = (send (ch, i); count (i + 1))
       in
         spawn (fn () => count 0);
         ch
       end

     val nats = makeNatStream ()
     val _ = spawn (fn () => print (Int.toString (recv nats)))
     val _ = print (Int.toString (recv nats))

  14. Synchronous message-passing (CML)

     📟 Synchronous message-passing = handshake:
       – receive blocks until a message is sent
       – send blocks until the message is received

     vs.

     📭 Asynchronous message-passing:
       – receive blocks until a message has arrived
       – send can finish immediately without blocking

  15. Synchronous message-passing (CML): time diagram

     Thread 1: send (ch, 0) blocks until another thread receives on ch;
               later, recv ch blocks until another thread sends on ch.
     Thread 2: recv ch; later, send (ch, 1).

  16. Asynchronous message-passing (not CML): time diagram

     Thread 1: send (ch, 0); send (ch, 0); send (ch, 0).
               send does not block.
     Thread 2: recv ch blocks until a thread has first sent on ch;
               then recv ch; recv ch.
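As an aside not in the slides, Go offers both flavors in one construct: an unbuffered channel gives the CML-style synchronous handshake, while a buffered channel lets sends complete with no receiver waiting, as in the asynchronous diagram. The helper names below are illustrative.

```go
package main

import "fmt"

// syncDemo: an unbuffered channel is a synchronous handshake, as in
// CML. A lone send would block forever, so the sender must run in
// another goroutine.
func syncDemo() int {
	ch := make(chan int)
	go func() { ch <- 42 }() // blocks until the receive below
	return <-ch
}

// asyncDemo: a buffered channel behaves like the asynchronous model
// while buffer space remains; these sends finish immediately with
// no receiver waiting.
func asyncDemo() int {
	ch := make(chan int, 3)
	ch <- 1
	ch <- 2
	ch <- 3 // none of these block
	return <-ch + <-ch + <-ch
}

func main() {
	fmt.Println(syncDemo(), asyncDemo()) // prints 42 6
}
```

Note the asymmetry the slides draw: even in the asynchronous model, receive still blocks until a message has arrived; only send is non-blocking.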

  17. First-class events, combinators

     Event constructors
       val sendEvt : (’a chan * ’a) -> unit event
       val recvEvt : ’a chan -> ’a event

     Event combinators
       val sync : ’a event -> ’a
       val choose : ’a event list -> ’a event
       val wrap : (’a event * (’a -> ’b)) -> ’b event

       val select = sync o choose
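Go's select statement is a rough analogue of sync applied to choose over receive events, with wrap's post-processing inlined in each case. It is only rough: a Go select is a statement, not a first-class value that can be stored or combined the way CML events can. The helper name choose2 is mine, for illustration.

```go
package main

import "fmt"

// choose2 waits on two channels at once and commits to whichever is
// ready first, like select [recvEvt a, recvEvt b] in CML; the
// per-case result strings play the role of wrap.
func choose2(a, b <-chan int) string {
	select {
	case v := <-a:
		return fmt.Sprintf("from a: %d", v)
	case v := <-b:
		return fmt.Sprintf("from b: %d", v)
	}
}

func main() {
	a := make(chan int)
	b := make(chan int)
	go func() { b <- 7 }() // only b ever sends
	fmt.Println(choose2(a, b)) // prints "from b: 7"
}
```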

  18. Utilities

     val recv = sync o recvEvt
     val send = sync o sendEvt

     fun forever init f =
       let
         fun loop s = loop (f s)
       in
         spawn (fn () => loop init);
         ()
       end

  19. Why combinators? (Remember: synchronous (blocking) message-passing.)

     fun makeZipCh (inChA, inChB, outCh) =
       forever () (fn () =>
         let
           val (a, b) = select [
             wrap (recvEvt inChA, fn a => (a, recv inChB)),
             wrap (recvEvt inChB, fn b => (recv inChA, b))
           ]
         in
           send (outCh, (a, b))
         end)
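The same zip idea can be rendered in Go (illustrative, not from the slides): select on whichever input is ready first, then block for the other, mirroring the two wrapped receive events. The name makeZipCh is carried over; the pair type [2]int stands in for SML's tuple.

```go
package main

import "fmt"

// makeZipCh pairs one value from inA with one from inB, accepting
// them in either arrival order, and sends the pair on out, looping
// forever like the CML version built from forever, select, and wrap.
func makeZipCh(inA, inB <-chan int, out chan<- [2]int) {
	go func() {
		for {
			var a, b int
			select { // commit to whichever input is ready first
			case a = <-inA:
				b = <-inB // then block for the other
			case b = <-inB:
				a = <-inA
			}
			out <- [2]int{a, b}
		}
	}()
}

func main() {
	inA := make(chan int)
	inB := make(chan int)
	out := make(chan [2]int)
	makeZipCh(inA, inB, out)
	go func() { inB <- 2 }()
	go func() { inA <- 1 }()
	fmt.Println(<-out) // prints [1 2]
}
```

Without the choice between the two receives, a fixed receive order could deadlock or stall when producers happen to be ready in the other order; that is the point of the combinators on the slide.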

  20. More CML
     • Emulating mutable state via concurrency: cml-cell.sml
     • Dataflow / pipeline computation
     • Implement futures

  21. Why avoid mutation?
     • For parallelism?
     • For concurrency?
     Other models: shared-memory multithreading + synchronization …

  22. Shared-Memory Multithreading
     Implicit communication through sharing.
     Shared: heap and globals.
     Unshared: locals and control (each thread has its own program counter).

  23. Concurrency and Race Conditions

     int bal = 0;
     Thread 1: t1 = bal; bal = t1 + 10
     Thread 2: t2 = bal; bal = t2 - 10

     One interleaving: t1 = bal; bal = t1 + 10; t2 = bal; bal = t2 - 10
     Result: bal == 0

  24. Concurrency and Race Conditions

     int bal = 0;
     Thread 1: t1 = bal; bal = t1 + 10
     Thread 2: t2 = bal; bal = t2 - 10

     Another interleaving: t1 = bal; t2 = bal; bal = t1 + 10; bal = t2 - 10
     Result: bal == -10 (a race condition)

  25. Concurrency and Race Conditions

     Lock m = new Lock();
     int bal = 0;

     Thread 1:
       synchronized(m) { t1 = bal; bal = t1 + 10 }
     Thread 2:
       synchronized(m) { t2 = bal; bal = t2 - 10 }

     Each synchronized block is an acquire(m) … release(m) pair at runtime,
     so the two read-modify-write sequences can no longer interleave.
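A runnable Go analogue of the locked version (a sketch; the names account and deposit are mine): a sync.Mutex plays the role of Lock m, making each read-modify-write atomic so the bal == -10 interleaving is impossible.

```go
package main

import (
	"fmt"
	"sync"
)

// account guards bal with a mutex so each read-modify-write runs
// atomically, like the synchronized(m) blocks on the slide.
type account struct {
	mu  sync.Mutex
	bal int
}

func (a *account) deposit(amount int) {
	a.mu.Lock() // acquire(m)
	t := a.bal
	a.bal = t + amount
	a.mu.Unlock() // release(m)
}

func main() {
	var a account
	var wg sync.WaitGroup
	wg.Add(2)
	go func() { defer wg.Done(); a.deposit(10) }()
	go func() { defer wg.Done(); a.deposit(-10) }()
	wg.Wait()
	fmt.Println(a.bal) // prints 0 in every interleaving
}
```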

