Quantitative Reasoning for Proving Lock-Freedom
Jan Hoffmann, Michael Marmar, and Zhong Shao


slide-1
SLIDE 1

Jan Hoffmann, Michael Marmar, and Zhong Shao

Quantitative Reasoning for Proving Lock-Freedom

slide-2
SLIDE 2

Jan Hoffmann, Michael Marmar, and Zhong Shao

Quantitative Reasoning for Proving Lock-Freedom

Mike is at LICS too.


slide-9
SLIDE 9

Concurrent Data Structures

Multiprocessor OS Kernel

[Figure: several threads concurrently reading and writing values in shared memory]

Threads    Shared Memory

Need synchronization to avoid race conditions.

slide-12
SLIDE 12

Non-Blocking Synchronization

  • Classical Synchronization: locks ensure mutual exclusion of threads

➡ Performance issues on modern multiprocessor architectures

  • Blocking (busy waiting)
  • Cache-coherency (high memory contention)
  • Non-Blocking Synchronization: shared data is accessed without locks

➡ Outperforms lock-based synchronization in many scenarios

  • Interference of threads possible
  • Need to ensure consistency of the data structure
slide-18
SLIDE 18

How to Ensure Consistency Without Locks?

  • Attempt to perform an operation
  • Repeat operations after interference has been detected
  • Ensure that a concurrent execution is equivalent to some sequential execution
  • Desired properties: linearizability or serializability
  • Different program logics exist, e.g., in [Fu et al. 2010]
  • Contextual refinement, e.g., in [Liang et al. 2013]
  • But: we also need additional progress guarantees.

Optimistic synchronization. Sequential consistency.

slide-21
SLIDE 21

Sequential Consistency is Not Enough

Livelocks

[Figure: two threads repeatedly interfere with each other's operations on the shared memory; each detected interference causes the other thread to repeat its operation, indefinitely]

Data structure is consistent but system is stuck (livelock).
slide-22
SLIDE 22

Progress Properties

Let D be a shared-memory data structure with operations π1, ..., πk.

  • Assume a system with m threads that access D exclusively via the operations πi
  • Assume that all code outside the data structure operations terminates
  • Fix an arbitrary scheduling of the m threads in which one or more operations have been started

slide-26
SLIDE 26

Progress Properties

  • A wait-free implementation guarantees that every thread can complete any started operation of the data structure in a finite number of steps
  • A lock-free implementation guarantees that some thread will complete an operation in a finite number of steps
  • An obstruction-free implementation guarantees the completion of an operation for any thread that eventually executes in isolation
  • Wait-freedom implies lock-freedom
  • Lock-freedom implies obstruction-freedom

Wait-freedom: no livelocks and no starvation. Lock-freedom: no livelocks.

Liang et al. Characterizing Progress Properties via Contextual Refinements. CONCUR'13.

slide-29
SLIDE 29

Our Results

New quantitative technique to verify lock-freedom

  • Uses quantitative compensation schemes to pay for possible interference
  • Enables local and modular reasoning
  • Our paper: formalization based on concurrent separation logic [O'Hearn 2007] and quantitative separation logic [Atkey 2010]
  • Running example: Treiber's non-blocking stack (a classic lock-free data structure)

Sweet spot: strong progress guarantee and efficient, elegant implementations. Classically: temporal logic and whole-program analysis.

slide-32
SLIDE 32

Treiber's Non-Blocking Stack

struct Node { value_t data; Node *next; };

Node *S;

void init() { S = NULL; }

Stack is a linked list. Shared pointer to the top element.

slide-36
SLIDE 36

Treiber's Non-Blocking Stack

void push(value_t v) {
  Node *t, *x;
  x = new Node();
  x->data = v;
  do {                      // prepare update
    t = S;
    x->next = t;
  } while(!CAS(&S,t,x));    // CAS-guarded while-loop to detect interference
}

Compare-and-swap operation CAS(&S,t,x): if the address &S contains t then write x into &S and return true; else return false.
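The CAS semantics stated above can be exercised with C11 atomics. A minimal sketch, not the talk's code; the names cas_int and cas_demo are ours for illustration:

```c
#include <assert.h>
#include <stdatomic.h>
#include <stdbool.h>

/* CAS(&s, t, x): if s contains t then write x into s and return true;
   else return false.  C11's strong compare-exchange behaves this way
   (it additionally writes the observed value back into *expected,
   which we discard here since t is a local copy). */
bool cas_int(_Atomic int *s, int t, int x) {
    return atomic_compare_exchange_strong(s, &t, x);
}

/* exercise both outcomes of the CAS */
bool cas_demo(void) {
    _Atomic int s = 5;
    bool ok1 = cas_int(&s, 5, 7);   /* s contains 5: succeeds, s becomes 7 */
    bool ok2 = cas_int(&s, 5, 9);   /* s contains 7, not 5: fails */
    return ok1 && !ok2 && atomic_load(&s) == 7;
}
```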

slide-38
SLIDE 38

Treiber's Non-Blocking Stack

value_t pop() {
  Node *t, *x;
  do {
    t = S;
    if (t == NULL) { return EMPTY; }
    x = t->next;
  } while(!CAS(&S,t,x));    // CAS-guarded while-loop to detect interference
  return t->data;
}
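For reference, the whole stack can be rendered as compilable C11. This is a sketch under assumptions the slides leave open (value_t concretized as int, EMPTY as -1, allocation via malloc), exercised here only sequentially; a production version must additionally handle safe memory reclamation and the ABA problem, which the slides elide:

```c
#include <assert.h>
#include <stdlib.h>
#include <stdbool.h>
#include <stdatomic.h>

typedef int value_t;
#define EMPTY (-1)                 /* assumed sentinel; abstract in the talk */

struct Node { value_t data; struct Node *next; };
static _Atomic(struct Node *) S;   /* shared pointer to the top element */

void init(void) { atomic_store(&S, NULL); }

/* CAS(&S, t, x) as on the slides */
static bool cas(struct Node *t, struct Node *x) {
    return atomic_compare_exchange_strong(&S, &t, x);
}

void push(value_t v) {
    struct Node *t, *x = malloc(sizeof *x);
    x->data = v;
    do {                           /* prepare update; retry on interference */
        t = atomic_load(&S);
        x->next = t;
    } while (!cas(t, x));
}

value_t pop(void) {
    struct Node *t, *x;
    do {
        t = atomic_load(&S);
        if (t == NULL) return EMPTY;
        x = t->next;
    } while (!cas(t, x));
    return t->data;                /* t leaks here: safe reclamation omitted */
}
```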

slide-41
SLIDE 41

Proving Lock-Freedom: Step One

Reducing the problem to termination. Let D be a shared-memory data structure with operations π1, ..., πk.

Definitions

  Sn = { op1; ... ; opn | ∀i : opi ∈ {π1, ..., πk} }        S = ⋃_{n∈ℕ} Sn
  Pm = { s1 ∥ ... ∥ sm | ∀i : si ∈ S }                       P = ⋃_{m∈ℕ} Pm

Theorem

  D is lock-free ⟺ every P ∈ P terminates

(m threads execute a finite number of operations.)

Problem: Termination of one thread depends on the behavior of the other threads.
slide-43
SLIDE 43

Proving Lock-Freedom: Step Two

Prove an upper bound on the number of loop iterations in P ∈ P.

How to reason about one operation πi locally when the number of loop iterations depends on the other threads?

Observation: In a lock-free data structure, every iteration of an unsuccessful operation is caused by the successful completion of an operation by another thread.

Idea: Threads that perform a successful operation have to pay for the additional loop iterations that they (potentially) cause in other threads. Quantitative compensation scheme.

slide-48
SLIDE 48

A Quantitative Compensation Scheme for Treiber's Stack

  • Every thread has a number of tokens to pay for its loop iterations
  • For Treiber's stack, a thread needs m tokens to execute a push or pop (m = number of threads)
  • After the execution the tokens disappear
  • If n operations are executed then m*n is a bound on the loop iterations
  • Successful push/pop: 1 token is used to pay for the loop iteration; m-1 tokens are transferred to other threads
  • Unsuccessful push/pop: 1 token is used to pay for the loop iteration; 1 token is received from a successful thread

Availability of m tokens is a loop invariant.

slide-60
SLIDE 60

A Quantitative Compensation Scheme for Treiber's Stack

[Figure: animation of several threads concurrently executing push on the shared stack. Each thread needs m tokens for its push operation. The thread whose CAS succeeds exits the loop and pays m-1 tokens to the other threads; each thread whose CAS fails iterates the loop and receives a token from the successful thread.]

Treiber's stack is lock-free iff we have enough tokens to pay for all loop iterations.
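The bookkeeping in this animation can be checked with a toy sequential simulation. This is our illustration, not code from the paper: simulate, the 64-thread cap, and the round-robin choice of the successful thread are all assumptions. Every operation starts with m tokens, each loop iteration costs one token, and a successful thread pays one token to every other pending thread; the assertions confirm that no thread ever runs out of tokens, so m tokens per operation suffice and m·n bounds the total iterations.

```c
#include <assert.h>

/* Simulate m threads (m <= 64), each performing `ops` stack operations.
   In every round each pending thread pays one token for a loop iteration;
   one thread (chosen round-robin) succeeds, pays one token to every other
   pending thread, and its surplus tokens disappear. */
long simulate(int m, int ops) {
    int remaining[64];   /* operations left per thread */
    long tokens[64];     /* current token balance per thread */
    long iters = 0;
    int active = m, turn = 0;
    assert(m > 0 && m <= 64);
    for (int i = 0; i < m; i++) { remaining[i] = ops; tokens[i] = m; }
    while (active > 0) {
        /* every thread with a pending operation runs one loop iteration */
        for (int i = 0; i < m; i++)
            if (remaining[i] > 0) { tokens[i]--; iters++; assert(tokens[i] >= 0); }
        while (remaining[turn % m] == 0) turn++;   /* pick the successful thread */
        int w = turn % m; turn++;
        for (int i = 0; i < m; i++)                /* compensate the losers */
            if (i != w && remaining[i] > 0) { tokens[w]--; tokens[i]++; }
        assert(tokens[w] >= 0);
        tokens[w] = 0;                             /* surplus tokens disappear */
        if (--remaining[w] > 0) tokens[w] = m;     /* next op: fresh m tokens */
        else active--;
    }
    return iters;
}
```

Running it with any m and ops keeps every intermediate balance non-negative, and the returned iteration count stays within the m·n bound for n = m·ops operations.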

slide-72
SLIDE 72

Quantitative Reasoning in Separation Logic

[P] C [Q]: P and Q are predicates on program states.

Total correctness: if program C is started in a state which satisfies P then C terminates and Q holds after the execution.

Frame rule for modular and local reasoning:

        [P] C [Q]
  ───────────────────── (Frame)
   [P ∗ R] C [Q ∗ R]

Separating conjunction P ∗ R: the heap can be split so that one part satisfies P and the other part satisfies R.

(Affine) predicate for tokens: ♦. A state with n tokens available satisfies ♦ ∗ ... ∗ ♦ (n times), abbreviated ♦ⁿ. Use the separating conjunction to combine it with other predicates, as in ♦ ∗ P.

While rule: consume one token in each iteration.

   P ∧ B ⟹ P′ ∗ ♦      I ⊢ [P′] C [P]
  ───────────────────────────────────── (While)
      I ⊢ [P] while B do C [P ∧ ¬B]
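As a toy instance of the While rule (our example, not from the talk), consider a countdown loop whose invariant carries one token per remaining iteration:

```latex
\[
\begin{array}{c}
P \;\triangleq\; x \ge 0 \wedge \Diamond^{x}
\qquad
P \wedge x > 0 \;\Longrightarrow\; (x > 0 \wedge \Diamond^{x-1}) \ast \Diamond
\qquad
I \vdash [\, x > 0 \wedge \Diamond^{x-1} \,]\ x := x - 1\ [\, P \,]
\\[6pt]
\hline
\\[-4pt]
I \vdash [\, P \,]\ \mathbf{while}\ x > 0\ \mathbf{do}\ x := x - 1\ [\, P \wedge \neg(x > 0) \,]
\end{array}
\]
```

The antecedent holds because \(\Diamond^{x} = \Diamond^{x-1} \ast \Diamond\), and the body re-establishes P since decrementing x matches spending one token. The postcondition forces x = 0, so a loop started with \(\Diamond^{x_0}\) runs exactly x₀ iterations.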

slide-79
SLIDE 79

Concurrent Separation Logic

Judgment I ⊢ [P] C [Q]. The resource invariant I is another predicate: it describes the shared memory, which can only be accessed in atomic blocks. (An extension to CCRs is possible.)

   emp ⊢ [P ∗ I] C [Q ∗ I]
  ─────────────────────────── (Atom)
    I ⊢ [P] atomic{C} [Q]

Assume invariant I holds on entry to the atomic block; ensure invariant I holds on exit.

   I ⊢ [P1] C1 [Q1]      I ⊢ [P2] C2 [Q2]
  ──────────────────────────────────────── (Par)
     I ⊢ [P1 ∗ P2] C1 ∥ C2 [Q1 ∗ Q2]

Parallel composition. Modular reasoning.

slide-81
SLIDE 81

Back to the Stack: Formal Specification

Resource Invariant

  I ≜ ∃u. S ↦ u ∗ ⊛_{0≤i<n} α(i, u)

  α(i, u) ≜ ∃a, c. C[i] ↦r c ∗ A[i] ↦r a ∗ (c = 0 ∨ a = u ∨ ♦)

For every thread i: if i has read the stack pointer then it is either unchanged or there is a token available.

Specifications of Push and Pop

  I ⊢ [A[tid] ↦r _ ∗ C[tid] ↦r _ ∗ ♦ⁿ] push(v) [A[tid] ↦r _ ∗ C[tid] ↦r _]

  I ⊢ [A[tid] ↦r _ ∗ C[tid] ↦r _ ∗ ♦ⁿ] pop()  [A[tid] ↦r _ ∗ C[tid] ↦r _]

slide-82
SLIDE 82

Verifying Treiber's Stack

While loop of push (♦ is the token predicate, γr a read-permission variant of γ, I′(tid, u) the invariant without thread tid's part):

S := alloc(1); [S] := 0; A := alloc(max_tid); C := alloc(max_tid);

push(v) ≜
  pushed := false;
  x := alloc(2); [x] := v;
  [(pushed ∨ ♦ⁿ) ∗ γr(tid, _, _)]                                   // loop invariant
  while ( !pushed ) do {
    // While rule antecedent:
    //   ((pushed ∨ ♦ⁿ) ∗ γr(tid, _, _)) ∧ !pushed ⟹ ♦ⁿ⁻¹ ∗ γr(tid, _, _) ∗ ♦
    [♦ⁿ⁻¹ ∗ γr(tid, _, _)]
    atomic {
      [♦ⁿ⁻¹ ∗ γr(tid, _, _) ∗ S ↦ u ∗ α(tid, u) ∗ I′(tid, u)]       // atomic block
      [♦ⁿ⁻¹ ∗ γ(tid, _, _) ∗ S ↦ u ∗ I′(tid, u)]                    // impl. & read perm.
      t := [S];
      [♦ⁿ⁻¹ ∗ γ(tid, _, _) ∗ (S ↦ u ∧ t = u) ∗ I′(tid, u)]          // read & frame
      C[tid] := 1
      [♦ⁿ⁻¹ ∗ γ(tid, _, 1) ∗ (S ↦ u ∧ t = u) ∗ I′(tid, u)]          // assignment
      A[tid] := t
      [♦ⁿ⁻¹ ∗ (A[tid] ↦ t ∧ t = u) ∗ C[tid] ↦ 1 ∗ S ↦ u ∗ I′(tid, u)]
      [♦ⁿ⁻¹ ∗ γr(tid, t, 1) ∗ S ↦ u ∗ α(tid, u) ∗ I′(tid, u)]       // perm.
      [♦ⁿ⁻¹ ∗ γr(tid, t, 1) ∗ I]                                    // exist. intro & (3)
    };
    [♦ⁿ⁻¹ ∗ γr(tid, t, 1)]                                          // atomic block & frame
    // [x+1] := t; this is not essential for lock-freedom
    atomic {
      [♦ⁿ⁻¹ ∗ γr(tid, t, 1) ∗ I]                                    // atomic block
      [♦ⁿ⁻¹ ∗ γr(tid, t, 1) ∗ S ↦ u ∗ ⊛_{1≤i≤n} α(i, u)]            // exist. elim.
      s := [S];
      if s == t then {
        [♦ⁿ⁻¹ ∗ γ(tid, _, _) ∗ S ↦ t ∗ ⊛_{i∈{1,...,n}\{tid}} γ(i, _, _)]
        [S] := x;
        [γ(tid, _, _) ∗ S ↦ x ∗ I′(tid, x)]                         // permissions & (4)
        pushed := true
        [(pushed ∨ ♦ⁿ) ∗ γ(tid, _, _) ∗ ∃u. S ↦ u ∗ I′(tid, u)]
      } else {
        [♦ⁿ⁻¹ ∗ (t ≠ u ∧ γr(tid, t, 1)) ∗ α(tid, u) ∗ S ↦ u ∗ I′(tid, u)]
        [♦ⁿ ∗ γ(tid, t, 1) ∗ S ↦ u ∗ I′(tid, u)]                    // impl. using (5)
        skip
        [(pushed ∨ ♦ⁿ) ∗ γ(tid, _, _) ∗ ∃u. S ↦ u ∗ I′(tid, u)]
      };
      C[tid] := 0
      [(pushed ∨ ♦ⁿ) ∗ γ(tid, _, 0) ∗ S ↦ u ∗ I′(tid, u)]           // write & exist. elim.
      // (above) and permissions & impl.
      [(pushed ∨ ♦ⁿ) ∗ γr(tid, _, _) ∗ α(tid, u) ∗ S ↦ u ∗ I′(tid, u)]
      [(pushed ∨ ♦ⁿ) ∗ γr(tid, _, _) ∗ I]                           // exist. intro
    };
    [(pushed ∨ ♦ⁿ) ∗ γr(tid, _, _)]                                 // atomic block end
  }

slide-83
SLIDE 83

More Advanced Shared-Memory Data Structures

  Data Structure               Tokens Per Operation
  Treiber's Stack              n
  Michael and Scott's Queue    n + 1
  Hazard-Pointer Stack         n + (c · n)
  Hazard-Pointer Queue         (n + 1) + (c · n)
  Elimination-Backoff Stack    n · (n + 1)

Details are in the Paper
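To make the arithmetic in the table concrete, the per-operation token bounds can be written as small functions of the thread count n and, for the hazard-pointer structures, the constant c. These helpers are ours, just transcribing the table:

```c
#include <assert.h>

/* tokens needed per operation (n = number of threads, c = constant
   from the hazard-pointer analysis), as listed in the table */
int treiber_stack(int n)        { return n; }
int ms_queue(int n)             { return n + 1; }
int hp_stack(int n, int c)      { return n + c * n; }
int hp_queue(int n, int c)      { return (n + 1) + c * n; }
int eb_stack(int n)             { return n * (n + 1); }
```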

slide-85
SLIDE 85

Conclusion and Ongoing Work

  • Compensation schemes simplify reasoning about lock-freedom
  • Quantitative reasoning can be directly integrated in modern logics for safety properties
  • The reasoning works for involved real-world data structures
  • Current work: adapt quantitative reasoning to prove termination-sensitive contextual refinement
  • Future work: quantitative reasoning for classic lock-based synchronization

Thank you!