Synchronization 2: Locks (part 2), Mutexes


Synchronization 2: Locks (part 2), Mutexes — 1. load/store reordering. recall: out-of-order processors — processors execute instructions in a different order to hide delays from slow caches, variable computation rates, etc.; a convenient optimization: … (a sketch of why this matters to synchronization code follows below)
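A minimal C sketch (not from the slides; variable and function names are illustrative, and it uses GCC's __sync_synchronize(), the same barrier the xv6 code later in the deck uses) of why load/store reordering matters: without the barriers, either the compiler or the processor may reorder the two stores (or the two loads), so the consumer can observe ready == 1 and still read stale data.

    /* hedged sketch: flag/data handoff that needs ordering barriers */
    #include <pthread.h>
    #include <stdio.h>

    int data = 0;
    volatile int ready = 0;     /* volatile so the spin loop re-reads it */

    void *producer(void *arg) {
        data = 42;              /* store #1 */
        __sync_synchronize();   /* keep the store to data before the store to ready */
        ready = 1;              /* store #2 */
        return NULL;
    }

    void *consumer(void *arg) {
        while (!ready) { }      /* spin until the flag is set */
        __sync_synchronize();   /* keep the load of ready before the load of data */
        printf("%d\n", data);   /* without the barriers, this could print 0 */
        return NULL;
    }

    int main(void) {
        pthread_t p, c;
        pthread_create(&c, NULL, consumer, NULL);
        pthread_create(&p, NULL, producer, NULL);
        pthread_join(p, NULL);
        pthread_join(c, NULL);
    }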


  1–5. MSI example (five animation steps collapsed): CPU1 writes 101 to 0xA300 — the cache sees the write, announces "CPU1 is writing 0xA300", and other copies are invalidated; memory is not updated yet (writeback happens later). CPU1 then writes 102 to 0xA300 — the block is already in the Modified state, so nothing is communicated; a later read will "fix" memory. CPU2 asks "What is 0xA300?" — CPU1's copy is in the Modified state, so CPU1 must supply the value for CPU2; the value is written back to memory early, and both CPU1 and CPU2 end up holding 0xA300 = 102 in the Shared state (CPU1's copy could also become Invalid). [Diagram residue omitted: per-CPU cache tables and memory contents showing 0xA300 going from Shared (100) to Modified (101, then 102) in CPU1, then Shared (102) in CPU1, CPU2, and memory.]

  6. MSI: update memory — to write a value (enter the Modified state), we need to invalidate the other copies; we can avoid sending the actual value (shorter message / faster): "I am writing address X" versus "I am writing Y to address X"

  7. MSI: on cache replacement/writeback — replacement still happens, e.g. when we want to store something else in that cache slot; it requires a writeback if the block is Modified (= dirty bit), and changes the state to Invalid

  8. MSI state summary — Modified: value may be different than memory, and I am the only one who has it. Shared: value is the same as memory. Invalid: I don't have the value; I will need to ask for it.
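The same rules written as a hedged C sketch (purely illustrative, not the slides' code) of the per-block transitions a cache applies on local accesses and on bus messages from other caches:

    /* minimal MSI state machine for one cache block (illustrative only) */
    enum msi_state { INVALID, SHARED, MODIFIED };

    /* local processor reads the block */
    enum msi_state on_local_read(enum msi_state s) {
        if (s == INVALID) {
            /* must ask the bus "What is address X?", get the value, become Shared */
            return SHARED;
        }
        return s;               /* Shared or Modified: read silently */
    }

    /* local processor writes the block */
    enum msi_state on_local_write(enum msi_state s) {
        if (s != MODIFIED) {
            /* must announce "I am writing address X" so other caches invalidate */
            return MODIFIED;
        }
        return MODIFIED;        /* already exclusive: nothing communicated */
    }

    /* another cache announces it is writing this address */
    enum msi_state on_remote_write(enum msi_state s) {
        /* if Modified, write back (or forward) the dirty value first */
        return INVALID;
    }

    /* another cache asks to read this address */
    enum msi_state on_remote_read(enum msi_state s) {
        if (s == MODIFIED) {
            /* supply/write back the value; drop to Shared (or Invalid) */
            return SHARED;
        }
        return s;
    }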

  9. MSI extensions — extra states for unmodified values where no other cache has a copy (avoids sending the "I am writing" message later); allow values to be sent directly between caches (in MSI, the value needs to go to memory first); support not sending invalidate/etc. messages to all cores — requires some tracking of which cores have each address, and only makes sense with a non-shared-bus design

  10. atomic read-modify-write — it's really hard to build locks out of atomic loads and stores, and normal loads/stores aren't even atomic… so processors provide read/modify/write operations: one instruction that atomically reads, modifies, and writes back a value

  11. x86 atomic exchange — lock xchg (%ecx), %eax: atomic exchange. temp ← M[ECX]; M[ECX] ← EAX; EAX ← temp …without being interrupted by other processors, etc.
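For reference, a hedged C-level equivalent (a toolchain assumption, not shown in the slides): C11's atomic_exchange typically compiles to exactly this kind of locked xchg on x86-64.

    #include <stdatomic.h>

    /* atomically swap *addr and value, returning the old contents of *addr;
       on x86-64 this usually becomes a single (implicitly locked) xchg */
    static inline int exchange_int(atomic_int *addr, int value) {
        return atomic_exchange(addr, value);
    }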

  12–13. test-and-set: using atomic exchange — one instruction that… writes a fixed new value and reads the old value. write: mark a lock as TAKEN (no matter what). read: see if it was already TAKEN (if not, we are the one who took it). (a C sketch follows below)
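A minimal test-and-set spinlock sketch built on that exchange (assumes C11 atomics; names are illustrative, not the slides' code):

    #include <stdatomic.h>

    typedef struct { atomic_int taken; } spinlock_t;   /* 0 = free, 1 = TAKEN */

    void spin_lock(spinlock_t *l) {
        /* write TAKEN no matter what; the old value tells us whether it was already TAKEN */
        while (atomic_exchange(&l->taken, 1) != 0) {
            /* someone else had it; retry */
        }
    }

    void spin_unlock(spinlock_t *l) {
        atomic_store(&l->taken, 0);     /* release: mark the lock free again */
    }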

  14. implementing atomic exchange — get the cache block into the Modified state, then do the read+modify+write operation while the state doesn't change. recall: Modified state = "I am the only one with a copy"

  15–19. x86-64 spinlock with xchg (five identical animation slides collapsed). lock variable in shared memory: the_lock — if 1, someone has the lock; if 0, the lock is free to take. acquire: set the lock variable to 1 (locked) and read the old value; if the lock was already locked, retry — "spin" until the lock is released elsewhere. release: set the lock to 0 (unlocked). Intel's manual says: no reordering of loads/stores across a lock-prefixed or mfence instruction — which is what allows the looping acquire to finish.

    acquire:
        movl $1, %eax              // %eax ← 1
        lock xchg %eax, the_lock   // swap %eax and the_lock
                                   // sets the_lock to 1
                                   // sets %eax to prior value of the_lock
        test %eax, %eax            // if the_lock wasn't 0 before:
        jne acquire                //     try again
        ret

    release:
        mfence                     // for memory order reasons
        movl $0, the_lock          // then, set the_lock to 0
        ret

  20. some common atomic operations (1)

    // x86: xchg REGISTER, (ADDRESS)
    exchange(register, address) {
        temp = memory[address];
        memory[address] = register;
        register = temp;
    }

    // x86: emulate with exchange
    test-and-set(address) {
        old_value = memory[address];
        memory[address] = 1;
        return old_value != 0;   // e.g. set ZF flag
    }

  21. some common atomic operations (2)

    // x86: lock xaddl REGISTER, (ADDRESS)
    fetch_and_add(address, register) {
        old_value = memory[address];
        memory[address] += register;
        register = old_value;
    }

    // x86: mov OLD_VALUE, %eax; lock cmpxchg NEW_VALUE, (ADDRESS)
    compare_and_swap(address, old_value, new_value) {
        if (memory[address] == old_value) {
            memory[address] = new_value;
            return true;    // x86: set ZF flag
        } else {
            return false;   // x86: clear ZF flag
        }
    }
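These operations map directly onto compiler intrinsics; a hedged sketch using GCC/Clang __atomic builtins (a toolchain assumption, not part of the slides):

    /* fetch-and-add: returns the old value, like lock xaddl */
    long fetch_and_add_builtin(long *address, long amount) {
        return __atomic_fetch_add(address, amount, __ATOMIC_SEQ_CST);
    }

    /* compare-and-swap: returns true on success, like lock cmpxchg setting ZF */
    _Bool compare_and_swap_builtin(long *address, long old_value, long new_value) {
        return __atomic_compare_exchange_n(address, &old_value, new_value,
                                           0 /* strong */,
                                           __ATOMIC_SEQ_CST, __ATOMIC_SEQ_CST);
    }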

  22. append to singly-linked list

    void append_to_list(ListNode *head, ListNode *new_last_node) {
        ListNode *current_last_node = head;
        do {
            while (current_last_node->next) {
                current_last_node = current_last_node->next;
            }
        } while (
            !compare_and_swap(&current_last_node->next,
                              NULL, new_last_node)
        );
    }

  assumption 1: other threads may be appending to the list, but nodes are not being removed, reordered, etc.
  assumption 2: the processor will not reorder the previous stores into *new_last_node to take place after the store for the compare_and_swap

  23. common atomic operation pattern — try to acquire the lock, or update the next pointer, or …; detect if the try failed; if so, repeat

  24. exercise: implement fetch-and-add with compare-and-swap

    compare_and_swap(address, old_value, new_value) {
        if (memory[address] == old_value) {
            memory[address] = new_value;
            return true;    // x86: set ZF flag
        } else {
            return false;   // x86: clear ZF flag
        }
    }

  25. solution

    long my_fetch_and_add(long *p, long amount) {
        long old_value;
        do {
            old_value = *p;
        } while (!compare_and_swap(p, old_value, old_value + amount));
        return old_value;
    }
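A brief usage sketch (illustrative, not from the slides; it assumes my_fetch_and_add above is backed by a real compare_and_swap): several threads can add to one shared counter with no lock, and no increments are lost.

    #include <pthread.h>
    #include <stdio.h>

    long counter = 0;

    void *worker(void *arg) {
        for (int i = 0; i < 1000000; ++i)
            my_fetch_and_add(&counter, 1);   /* the CAS loop above */
        return NULL;
    }

    int main(void) {
        pthread_t t[4];
        for (int i = 0; i < 4; ++i) pthread_create(&t[i], NULL, worker, NULL);
        for (int i = 0; i < 4; ++i) pthread_join(t[i], NULL);
        printf("%ld\n", counter);            /* exactly 4000000 */
    }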

  26–29. xv6 spinlock: acquire (four identical animation slides collapsed)

    void
    acquire(struct spinlock *lk)
    {
      pushcli(); // disable interrupts to avoid deadlock.
      ...
      // The xchg is atomic.
      while(xchg(&lk->locked, 1) != 0)
        ;

      // Tell the C compiler and the processor to not move loads or stores
      // past this point, to ensure that the critical section's memory
      // references happen after the lock is acquired.
      __sync_synchronize();
      ...
    }

  notes: xchg() wraps the xchgl instruction — the same idea as the loop above. interrupts are disabled so we don't end up waiting for a lock held by a non-running thread. __sync_synchronize() avoids load/store reordering (including by the compiler); on x86, xchg alone avoids the processor's reordering (but the compiler might need more hints).

  30–33. xv6 spinlock: release (four identical animation slides collapsed)

    // Release the lock, equivalent to lk->locked = 0.
    // This code can't use a C assignment, since it might
    // not be atomic. A real OS would use C atomics here.
    void
    release(struct spinlock *lk)
    {
      ...
      // Tell the C compiler and the processor to not move loads or stores
      // past this point, to ensure that all the stores in the critical
      // section are visible to other cores before the lock is released.
      // Both the C compiler and the hardware may re-order loads and
      // stores; __sync_synchronize() tells them both not to.
      __sync_synchronize();

      asm volatile("movl $0, %0" : "+m" (lk->locked) : );

      popcli();
    }

  note: the asm volatile turns into a mov of 0 into lk->locked.

  34–37. xv6 spinlock: debugging stuff (four identical animation slides collapsed)

    void
    acquire(struct spinlock *lk)
    {
      ...
      if(holding(lk))
        panic("acquire");
      ...
      // Record info about lock acquisition for debugging.
      lk->cpu = mycpu();
      getcallerpcs(&lk, lk->pcs);
    }

    void
    release(struct spinlock *lk)
    {
      if(!holding(lk))
        panic("release");

      lk->pcs[0] = 0;
      lk->cpu = 0;
      ...
    }

  38–39. spinlock problems — spinlocks can send a lot of messages on the shared bus, making every non-cached memory access slower… and they waste CPU time waiting for another thread — could we do something useful instead?

  40–46. ping-ponging (seven animation slides collapsed): CPU1 holds the lock (its cache has the lock's block Modified, value "locked"). CPU2 read-modify-writes the lock ("I want to modify lock") just to see that it is still locked; then CPU3 read-modify-writes it ("I want to modify lock") to see that it is still locked; the Modified copy of the block keeps bouncing between the waiters' caches, invalidating the other copy each time. Eventually CPU1 sets the lock to unlocked ("I want to modify lock"), and some CPU (in this example, CPU2) acquires the lock. [Diagram residue omitted: per-CPU cache tables showing the lock's block alternating between Modified in one cache and Invalid in the others.]

  47. ping-ponging test-and-set — problem: the cache block "ping-pongs" between caches; each waiting processor reserves the block in order to modify it, and each transfer of the block sends messages on the bus… so the bus can't be used for real work — like whatever the processor that actually holds the lock is doing

  48. test-and-test-and-set

    acquire:
        cmp $0, the_lock           // test the lock non-atomically
                                   // unlike lock xchg --- keeps lock in Shared state!
        jne acquire                // try again (still locked)
        // lock possibly free
        // but another processor might lock it
        // before we get a chance to
        // ... so try with atomic swap:
        movl $1, %eax              // %eax ← 1
        lock xchg %eax, the_lock   // swap %eax and the_lock
                                   // sets the_lock to 1
                                   // sets %eax to prior value of the_lock
        test %eax, %eax            // if the_lock wasn't 0 (someone else got it first):
        jne acquire                //     try again
        ret
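The same idea as a C sketch (assumes C11 atomics; illustrative, not the slides' code): spin on a plain read first, so the waiting cache keeps the block Shared, and only attempt the atomic swap when the lock looks free.

    #include <stdatomic.h>

    void ttas_lock(atomic_int *the_lock) {
        for (;;) {
            /* "test": a plain load keeps the block in the Shared state while we wait */
            while (atomic_load(the_lock) != 0) {
                /* spin reading our own cached copy; no bus traffic */
            }
            /* "test-and-set": lock looks free, try the atomic swap */
            if (atomic_exchange(the_lock, 1) == 0)
                return;             /* we got it */
            /* another processor grabbed it first; go back to spinning */
        }
    }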

  49–55. less ping-ponging (seven animation slides collapsed): CPU1 holds the lock (Modified, "locked"). CPU2 reads the lock ("I want to read lock") to see that it is still locked — CPU1 writes back the lock value, then CPU2 reads it, and both copies become Shared. CPU3 reads the lock the same way ("I want to read lock"). From then on, CPU2 and CPU3 continue to read the lock from their own caches — no messages on the bus. Eventually CPU1 sets the lock to unlocked ("I want to modify lock"), and some CPU (in this example, CPU2) acquires the lock ("I want to modify lock"; CPU1 writes back the value, then CPU2 reads and modifies it). [Diagram residue omitted: cache tables showing the lock's block going from Modified in CPU1 to Shared in CPU1/CPU2/CPU3 while the waiters spin, then Modified in CPU2 after acquisition.]

  56. couldn't the read-modify-write instruction… notice that the value of the lock isn't changing, and keep the block in the Shared state? maybe — but that's an extra step in the "common" case (swapping different values)

  57. more room for improvement? — there can still be a lot of attempts to modify the lock right after it is unlocked; there are other spinlock designs that avoid this: ticket locks, MCS locks, … (a ticket-lock sketch follows below)
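A hedged sketch of one of the mentioned alternatives, a ticket lock (illustrative, not from the slides): each acquirer takes a ticket with fetch-and-add and then waits, with plain reads, until the "now serving" counter reaches its ticket — so waiters don't all pile onto one atomic swap the moment the lock is released.

    #include <stdatomic.h>

    typedef struct {
        atomic_uint next_ticket;    /* taken with fetch-and-add by each acquirer */
        atomic_uint now_serving;    /* advanced by unlock */
    } ticket_lock_t;

    void ticket_lock(ticket_lock_t *l) {
        unsigned my_ticket = atomic_fetch_add(&l->next_ticket, 1);
        while (atomic_load(&l->now_serving) != my_ticket) {
            /* spin (reads only, so the block can stay Shared) */
        }
    }

    void ticket_unlock(ticket_lock_t *l) {
        atomic_fetch_add(&l->now_serving, 1);   /* hand off to the next ticket */
    }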

  58. modifying cache blocks in parallel — cache coherency works on whole cache blocks, but a typical memory access is smaller than a cache block, e.g. one 4-byte array element in a 64-byte block. what if two processors modify different parts of the same cache block (4-byte writes to a 64-byte block)? cache coherency means the write instructions happen one at a time: a processor 'locks' the 64-byte cache block, fetching the latest version; updates 4 bytes of it; and later might give up the cache block

  59. modifying things in parallel (code)

    __attribute__((aligned(4096)))
    int array[1024];      /* aligned = address is mult. of 4096 */

    void *sum_up(void *raw_dest) {
        int *dest = (int *) raw_dest;
        for (int i = 0; i < 64 * 1024 * 1024; ++i) {
            *dest += data[i];
        }
    }

    void sum_twice(int distance) {
        pthread_t threads[2];
        pthread_create(&threads[0], NULL, sum_up, &array[0]);
        pthread_create(&threads[1], NULL, sum_up, &array[distance]);
        pthread_join(threads[0], NULL);
        pthread_join(threads[1], NULL);
    }

  60. performance v. array element gap (assuming sum_up compiled to not omit memory accesses) — [graph omitted: time in cycles (0 to ~500,000,000) versus distance between array elements in bytes (0 to 70)]

  61. false sharing — synchronizing to access two independent things that happen to live in two parts of the same cache block; solution: separate them (see the padding sketch below)
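One common way to "separate them": pad or align each thread's data to its own cache block. A hedged sketch (assumes 64-byte cache lines, as in the slides' example; names are illustrative):

    /* each counter gets its own 64-byte cache block, so writes by different
       threads never contend for the same block (assumes 64-byte lines) */
    struct padded_counter {
        long value;
        char pad[64 - sizeof(long)];
    } __attribute__((aligned(64)));

    struct padded_counter per_thread_sum[2];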

  62. spinlock problems (recap) — spinlocks can send a lot of messages on the shared bus, making every non-cached memory access slower… and they waste CPU time waiting for another thread — could we do something useful instead?

  63. problem: busy waits — while (xchg(&lk->locked, 1) != 0) ; — what if it's going to be a while? e.g. waiting for a process that's itself waiting for I/O? we'd really like to do something else with the CPU instead…

  64–66. mutexes: intelligent waiting (three identical animation slides collapsed) — mutexes are locks that wait better. the operations are still lock and unlock, but instead of running an infinite loop, a waiter gives away the CPU: lock = go to sleep and add self to a list of waiters; unlock = wake up a sleeping thread. one idea: use spinlocks to build mutexes — the spinlock protects the list of waiters from concurrent modification

  67–72. mutex: one possible implementation (six animation slides collapsed)

    struct Mutex {
        SpinLock guard_spinlock;   // protects lock_taken and wait_queue;
                                   // only held for a very short time (compared to the mutex itself)
        bool lock_taken = false;   // tracks whether any thread has locked and not unlocked
        WaitQueue wait_queue;      // list of threads that discovered the lock is taken
                                   // and are waiting for it to be free
    };

    LockMutex(Mutex *m) {
        LockSpinlock(&m->guard_spinlock);
        if (m->lock_taken) {
            put current thread on m->wait_queue
            make current thread not runnable
            /* xv6: myproc()->state = SLEEPING; */
            UnlockSpinlock(&m->guard_spinlock);
            run scheduler
        } else {
            m->lock_taken = true;
            UnlockSpinlock(&m->guard_spinlock);
        }
    }

    UnlockMutex(Mutex *m) {
        LockSpinlock(&m->guard_spinlock);
        if (m->wait_queue not empty) {
            remove a thread from m->wait_queue
            make that thread runnable
            /* xv6: thread's state = RUNNABLE; */
        } else {
            m->lock_taken = false;
        }
        UnlockSpinlock(&m->guard_spinlock);
    }

  notes: threads on the wait queue are not runnable. subtle: what if UnlockMutex() runs between putting ourselves on the wait queue and releasing the guard spinlock? that's why the locking thread is made not runnable before the guard spinlock is released. when there are waiters, instead of setting lock_taken to false, unlock chooses a thread to hand the lock off to. if woken up here, we need to make sure the scheduler doesn't run us on another core until we've switched to the scheduler (and saved our registers) — xv6 solution: acquire the ptable lock; Linux solution: separate 'on cpu' flags.
