SLIDE 1

Concurrent Programming Languages and Semantic Analyses

Manfred Schmidt-Schauss

Goethe-Universität Frankfurt, Institut für Informatik, Germany

RTA/TLCA 2014, 16 July 2014

Based on joint work with David Sabel

SLIDE 2

Concurrency of Programming and Languages

  Computation                 Semantics/Correctness
  deterministic               standard
  concurrent                  very complex
  random, nondeterministic    non-standard
  chaotic (Internet)          impossible?

SLIDE 3

Main Parts

  • Diagrams and correctness of transformations
  • Concurrency, non-determinism and contextual semantics
  • Correctness of a concurrent implementation

SLIDE 4

Main Parts

Introduction

Diagrams and correctness of transformations
  contextual equivalence, diagrams, correctness proofs, meta-rewriting sequences, automation
  LR (a deterministic calculus)

Concurrency, non-determinism and contextual semantics
  may- and should-convergence and contextual equivalences, conservativity
  CHF (a concurrent calculus)

Correctness of a concurrent implementation
  a complex real-world calculus: showing correctness using operational methods
  CSHF (concurrent implementation of software transactional memory)

SLIDE 5

Alternative semantics approaches, also under concurrency

  • denotational semantics
  • translations into the pi-calculus or other models
  • simulation / bisimulation
  • logical approaches
  • observational semantics / contextual semantics
SLIDES 6–9

Semantics Principles?

Question: Is there a best / standard semantics?
  • Yes for deterministic programming languages
  • No for non-deterministic and/or concurrent programming languages
  • But there are good choices

SLIDE 10

Contextual Semantics of Programming Languages

  e1 ≤ e2  iff  ∀C: C[e1]↓ ⟹ C[e2]↓
  e1 ∼ e2  iff  e1 ≤ e2 and e2 ≤ e1

Where:
  ei  expressions resp. programs
  C   contexts: programs with a hole
  e↓  e reduces to a successful program
  reduction: a fixed-strategy rewriting of programs
  ≤   contextual approximation
  ∼   contextual equivalence

Morris' contextual equivalence (thesis, 1968)

SLIDE 11

Contextual Semantics of Programming Languages

  e1 ≤may e2  iff  ∀C: C[e1]↓may ⟹ C[e2]↓may
  e1 ∼may e2  iff  e1 ≤may e2 and e2 ≤may e1

Where:
  ei     expressions resp. programs
  C      contexts: programs with a hole
  e↓may  e may reduce to a successful program (may-convergence)
  reduction: a fixed-strategy rewriting of programs
  ≤may   contextual approximation
  ∼may   contextual equivalence

Morris' contextual equivalence (thesis, 1968)

SLIDES 12–17

Examples

Q1: True ∼ False?
    One context suffices: C[.] = if [.] then ⊥ else True

Q2: mapStandard ∼ mapWeird?
    TODO: check for infinitely many programs P[.] whether P[mapStandard]↓ ⟺ P[mapWeird]↓

Q3: λx.⊥ ∼ ⊥?
    No: (λx.⊥)↓, but ⊥↑

Abramsky: The lazy lambda calculus, 1990
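The Q3 separation can be observed in Haskell itself, where forcing to weak head normal form succeeds on a lambda but not on ⊥. A minimal sketch (my own illustration, not from the talk; it uses `undefined` for ⊥ and treats the exception it raises as the observable analogue of divergence):

```haskell
import Control.Exception (SomeException, evaluate, try)

-- Collapse the result of a guarded forcing attempt to a Bool.
toBool :: Either SomeException a -> Bool
toBool = either (const False) (const True)

-- Observe whether forcing a value to WHNF succeeds.
whnfConverges :: a -> IO Bool
whnfConverges x = fmap toBool (try (evaluate x))

main :: IO ()
main = do
  lam <- whnfConverges (\_ -> undefined :: Integer)  -- λx.⊥ is a value (WHNF)
  bot <- whnfConverges (undefined :: Integer)        -- ⊥ has no WHNF
  print (lam, bot)  -- (True,False)
```

A context like seq [.] True thus separates λx.⊥ from ⊥, exactly as in Abramsky's lazy lambda calculus.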

SLIDE 18

Remarks on Alternative Approaches

SLIDES 19–20

Denotational Semantics

  ⟦.⟧ : L → D
  adequate: ⟦e1⟧ = ⟦e2⟧ ⟹ e1 ∼ e2
  in general not fully abstract: e1 ∼ e2 but ⟦e1⟧ ≠ ⟦e2⟧ is possible
  (usual argument: "parallel-or" is available in the denotation, but not in the language)

SLIDES 21–23

A Connection to Confluence et al.

Let → be the (compatible) reduction, i.e. permitted in all contexts.
Let →s be the (standard) reduction, i.e. under a strategy.

Definition: →s is standardizing iff e →* success implies e →s,* success.

Proposition
If → is confluent, →s is standardizing, and {success} remains stable under
reduction, then ↔ is sound for contextual equivalence ∼.

However
  • In general ↔ ⊂ ∼ (∼ is coarser than ↔).
  • confluence ⇏ determinism
  • In general →s is nonterminating.
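The ingredients of the proposition can be experimented with on a finite abstract rewrite system. A small sketch (the system and all names are mine, purely illustrative): given a one-step reduction, check that every one-step fork is joinable, i.e. local confluence, by bounded search:

```haskell
import Data.List (nub)

-- One-step reduction of a tiny abstract rewrite system on Ints:
-- 3 -> 1, 3 -> 2, 1 -> 0, 2 -> 0  (a joinable fork).
step :: Int -> [Int]
step 3 = [1, 2]
step 1 = [0]
step 2 = [0]
step _ = []

-- All terms reachable in at most n steps (including the start term).
reach :: Int -> Int -> [Int]
reach 0 t = [t]
reach n t = nub (t : concatMap (reach (n - 1)) (step t))

-- Local confluence, checked by bounded joinability of every one-step fork.
locallyConfluent :: Int -> [Int] -> Bool
locallyConfluent n ts =
  and [ any (`elem` reach n b) (reach n a)
      | t <- ts, a <- step t, b <- step t ]

main :: IO ()
main = print (locallyConfluent 5 [0, 1, 2, 3])  -- True
```

For a terminating system, local confluence gives confluence by Newman's lemma; the calculi in this talk are not terminating, which is one of the listed caveats.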

SLIDE 24

Diagrams and Correctness of Transformations

Calculus LR

SLIDE 25

LR (core-language of Haskell)

  • a pure (untyped) functional language with letrec, case, constructors, seq
  • call-by-need (deterministic) reduction
  • contextual equivalence based on may-convergence

slide-26
SLIDE 26

Calculus LR

Call-by-need reduction in LR (rules, a selection): (lbeta) (λx.e1) e2 → (letrec x = e2 in e1) (cp-in) (letrec x1 = vS, {xi = xi−1}m

i=2, Env in C[xV m])

→ (letrec x1 = v, {xi = xi−1}m

i=2, Env in C[v])

where v is an abstraction (llet) consists of two reduction rules: (llet-in)(letrec Env1 in (letrec Env2 in r)S) → (letrec Env1, Env2 in r) (llet-e) (letrec Env1, x = (letrec Env2 in sx)S in r) → (letrec Env1, Env2, x = sx in r)

S., Sch¨ utz, Sabel: Safety of N¨

  • cker’s strictness analysis. JFP 2008

13/52

SLIDE 27

Context Lemmas

Context Lemma in LR
  If for all reduction contexts R: R[s]↓ ⟹ R[t]↓, then s ≤may t.
  Reduction contexts are the contexts around the redexes
  (under the normal-order reduction strategy).

SLIDE 28

Context Lemmas

Context Lemma in LR
  If for all reduction contexts R: R[s]↓ ⟹ R[t]↓, then s ≤may t.

Context Lemma in LR; a weaker variant, better suited for computing diagrams
  If for all surface contexts S: S[s]↓ ⟹ S[t]↓, then s ≤may t.
  Surface contexts are the contexts where the hole is not inside an abstraction.

SLIDES 29–35

Correctness Proofs using Diagrams

Forking diagrams for (llet) wrt. S-contexts; a complete set.
Purpose: a proof of (llet) ⊆ ≤.

[diagrams: a complete set of five forking diagrams; each closes a fork between a
transformation step (iS,llet) and a normal-order standard reduction step (n,a),
using an (n,a)-step with or without a trailing (iS,llet)-step, sequences of
(n,lll)+-steps, or an (n,llet)-step]

Proof sketch of e↓ ∧ e →(S,llet,*) e′ ⟹ e′↓:
given a terminating standard reduction of e to a WHNF, the diagrams are applied
repeatedly to transform it, step by step, into a terminating standard reduction
of e′ (also ending in a WHNF).

SLIDE 36

Correctness Proofs using Diagrams

For the inverse direction e↓ ∧ e′ →* e ⟹ e′↓, the method applies in a similar
way: commuting diagrams instead of forking diagrams.

SLIDES 37–38

Correctness Proofs using Diagrams

Results
  • A large set of correct program transformations
  • Several length measures of standard reductions (complexity of evaluations),
    and transformations that improve the complexity

Applied in
  Niehren, Sabel, S., Schwinghammer: Observational Semantics for a Concurrent
    Lambda Calculus with Reference Cells and Futures. ENTCS 2007
  Sabel, S.: A call-by-need lambda calculus with locally bottom-avoiding choice. MSCS 2008
  Sabel, S.: A contextual semantics for Concurrent Haskell with futures. PPDP 2011

SLIDES 39–40

Correctness Proofs by Termination

If the diagrams are known and a correctness proof is required, then:
  • The diagrams can be interpreted as meta-rewriting rules on reduction sequences
    consisting of standard reductions and transformations.
  • The meta-irreducible reduction sequences are the standard reduction sequences.
  • It suffices to prove termination of the meta-rewriting system!
    (done for LR using the termination provers AProVE (RWTH Aachen)
    and IsaFoR / CeTA (University of Innsbruck))

(Rau, Sabel, S., IJCAR 2012)
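The flavor of this meta-rewriting can be sketched on sequences of step labels. Assuming, purely for illustration (this is not the talk's actual diagram set), a single commuting diagram that lets a transformation step T move past a standard step S, normalization pushes every T to the end, and termination is witnessed by a well-founded measure:

```haskell
-- Step labels of a mixed reduction sequence: S = standard step, T = transformation.
data Step = S | T deriving (Eq, Show)

-- One meta-rewriting step: apply the (illustrative) diagram T,S ~> S,T
-- at the leftmost possible position.
metaStep :: [Step] -> Maybe [Step]
metaStep (T : S : rest) = Just (S : T : rest)
metaStep (x : rest)     = fmap (x :) (metaStep rest)
metaStep []             = Nothing

-- Normalize: terminates because each step decreases the number of
-- inversions (a T occurring before an S), a well-founded measure.
normalize :: [Step] -> [Step]
normalize xs = maybe xs normalize (metaStep xs)

main :: IO ()
main = print (normalize [T, S, S, T, S])  -- [S,S,S,T,T]
```

The meta-irreducible form has all standard steps in front, which is the shape of argument the termination provers certify for the real (much larger) diagram sets.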

SLIDES 41–42

Correctness Proofs using Diagrams

Issue: Computation of Diagrams
  Similar to computing critical pairs (à la Knuth–Bendix).
  Some extra complications:
    • higher-order constructs and scoping
    • asymmetry due to reduction strategies
    • equational theories involved (in the letrec construct of Haskell)
    • ...

Automatic Computation of Diagrams (C. Rau, thesis in preparation):
  Computing a complete set of forking diagrams for all reduction rules is decidable
  in LR (letrec, case, constructors, ...); finitely many overlaps are sufficient.
  Method: applying nominal unification techniques to LR.

SLIDE 43

Concurrency, Non-Determinism and Contextual Semantics

SLIDES 44–46

Concurrency, Non-Determinism and Contextual Semantics

May-semantics is perfect for a deterministic setting.
But it has too little discriminating power for concurrent/nondeterministic evaluation:

  P  → P1 (a value)        P′ → P1 (a value)
  P  → P2 (a deadlock)

P ∼ P′ when using only may-convergence.

SLIDE 47

Contextual Semantics in Concurrency

  e1 ≤may e2     iff  ∀C: C[e1]↓may ⟹ C[e2]↓may
  e1 ∼may e2     iff  e1 ≤may e2 and e2 ≤may e1
  e1 ≤should e2  iff  ∀C: C[e1]↓should ⟹ C[e2]↓should
  e1 ∼should e2  iff  e1 ≤should e2 and e2 ≤should e1
  e1 ∼ e2        iff  e1 ∼may e2 and e1 ∼should e2

Where e↓should is defined as: ∀e′: e →* e′ ⟹ e′↓may
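May- and should-convergence can be made concrete on a finite transition system. A small sketch (the three-state system mirrors the P/P′ picture from the previous slides; all names here are mine):

```haskell
-- A tiny transition system: states with successors and a success predicate.
succs :: String -> [String]
succs "P"  = ["P1", "P2"]   -- P can reach a value or a deadlock
succs "P'" = ["P1"]         -- P' can only reach the value
succs _    = []

successful :: String -> Bool
successful s = s == "P1"

-- All states reachable from s (finite system, so plain worklist search).
reachable :: String -> [String]
reachable s = go [s] []
  where
    go [] seen = seen
    go (x : xs) seen
      | x `elem` seen = go xs seen
      | otherwise     = go (succs x ++ xs) (x : seen)

-- May-convergence: some reachable state is successful.
mayConv :: String -> Bool
mayConv s = any successful (reachable s)

-- Should-convergence: every reachable state can still may-converge.
shouldConv :: String -> Bool
shouldConv s = all mayConv (reachable s)

main :: IO ()
main = print [ (s, mayConv s, shouldConv s) | s <- ["P", "P'"] ]
-- [("P",True,False),("P'",True,True)]
```

So P may-converges but does not should-converge (the deadlock P2 is reachable), while P′ satisfies both: exactly the discrimination that may-convergence alone misses.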

SLIDE 48

Must-Convergence

Proposed by other researchers: may- and must-convergence:

  e1 ≤must e2  iff  ∀C: C[e1]↓must ⟹ C[e2]↓must
  e1 ∼must e2  iff  e1 ≤must e2 and e2 ≤must e1

Where e↓must is defined as: every reduction sequence from e is successful
(and terminating).

SLIDES 49–50

May-, Should-, Must-Convergence and Invariances

Invariant properties, Prop(e) ∧ e ∼ e′ ⟹ Prop(e′): invariances of ∼

  Invariance               may   may,should   may,must   may,should,must
  ∃ value                   Y         Y           Y             Y
  no error possible         N         Y           N             Y
  no infinite reductions    N         N           Y             Y

Discussion
  • more invariances mean fewer correct program transformations
  • fair evaluation must be explicitly required for ∼must
  • fair evaluation does not change ∼should
  • test case: are "busy-wait"-style implementations equivalent to
    buffer implementations?

Proposal: Use ∼may and ∼should.

SLIDE 51

Concurrent Haskell with Futures (CHF)
Semantic analyses using contextual semantics

SLIDE 52

Concurrent Haskell with Futures (CHF)

  • Concurrent Haskell (Peyton Jones, Gordon, Finne 1996) extends Haskell by concurrency.
  • The process calculus CHF (Sabel, S., PPDP 2011) models Concurrent Haskell with
    futures; operational semantics inspired by (Peyton Jones, 2001).
  • A future is a variable whose value is computed concurrently by a monadic
    computation; futures allow implicit synchronization by data dependency.
  • Concurrent Haskell + unsafeInterleaveIO can encode CHF
    (CHF is a sublanguage of Concurrent Haskell + unsafeInterleaveIO).
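A future in this sense can be sketched in plain Concurrent Haskell: fork the computation and hand back a handle whose read blocks until the value arrives. This is my minimal illustration of a strict future via `forkIO`/`MVar`; the calculus' lazy flavor would additionally involve `unsafeInterleaveIO`:

```haskell
import Control.Concurrent (forkIO)
import Control.Concurrent.MVar (newEmptyMVar, putMVar, readMVar)

-- A strict future: start the computation concurrently, return an action that
-- blocks until the value is available (readMVar does not empty the MVar,
-- so the future can be read many times).
future :: IO a -> IO (IO a)
future act = do
  slot <- newEmptyMVar
  _ <- forkIO (act >>= putMVar slot)
  return (readMVar slot)

main :: IO ()
main = do
  f <- future (return (length [1 .. 5 :: Int] - 3))
  v <- f          -- implicit synchronization: blocks until the future is filled
  print (v :: Int)  -- 2
```

The blocking read is the "implicit synchronization by data dependency" of the slide: consumers need no explicit locks, they simply demand the future's value.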

SLIDES 53–59

An example reduction in CHF:

  x ⇐main takeMVar a | z ⇐ putMVar a ((length u) − 3) >> takeMVar b
    | y ⇐ putMVar a 1 | u = [1,2,3,4,5] | a m − | b m 3

  →  x ⇐main takeMVar a | z ⇐ takeMVar b | y ⇐ putMVar a 1
       | u = [1,2,3,4,5] | a m 2 | b m 3

  →  x ⇐main takeMVar a | z = 3 | y ⇐ putMVar a 1
       | u = [1,2,3,4,5] | a m 2 | b m −

  →  x ⇐main return 2 | z = 3 | y ⇐ putMVar a 1
       | u = [1,2,3,4,5] | a m − | b m −

An MVar is a one-place buffer, which may be empty (a m −) or filled (a m e).
  takeMVar a empties the (filled) MVar with address a
  putMVar a e fills the (empty) MVar with address a
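The reduction above can be replayed in actual Concurrent Haskell. My transliteration below keeps the z-thread and the main thread; the racing y-thread of the slide is omitted so that the run stays deterministic:

```haskell
import Control.Concurrent (forkIO)
import Control.Concurrent.MVar (newEmptyMVar, newMVar, putMVar, takeMVar)

main :: IO ()
main = do
  a <- newEmptyMVar            -- a m −  (empty)
  b <- newMVar (3 :: Int)      -- b m 3  (filled)
  let u = [1, 2, 3, 4, 5 :: Int]
  -- thread z: fill a with (length u − 3), then empty b
  _ <- forkIO (do putMVar a (length u - 3)
                  _ <- takeMVar b
                  return ())
  -- main thread: take the value of a (blocks until z has filled it)
  x <- takeMVar a
  print x                      -- 2
```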

SLIDE 60

The Process Calculus CHF

Processes (a parallel composition of components P1 | P2 | ... | Pn):

  P, Pi ∈ Proc ::= P1 | P2            parallel composition
                 | νx.P               name restriction
                 | x ⇐ e              future x
                 | x = e              binding
                 | x m e  |  x m −    filled / empty MVar

A process has a main thread: x ⇐main e | P

Expressions:

  e, ei ∈ ExprCHF ::= x | λx.e | (e1 e2) | seq e1 e2 | c e1 ... eα(c)
                    | caseT e of ... (cT,i x1 ... xα(cT,i) → ei) ...
                    | letrec x1 = e1, ..., xn = en in e
                    | return e | e1 >>= e2 | future e
                    | takeMVar e | newMVar e | putMVar e1 e2

Types: standard monomorphic type system

SLIDE 61

Operational Semantics

Reduction P1 →CHF P2: small-step reduction, call-by-name variant.
It is known that call-by-name and call-by-need are equivalent w.r.t. ∼
(Sabel, S., PPDP 2011).

Rules are closed w.r.t. structural congruence and process contexts.
Reduction rules for monadic computation and functional evaluation; examples:

  monadic:    (fork)  x ⇐ M[future e]  →CHF  νy.(x ⇐ M[return y] | y ⇐ e), y fresh
  functional: (beta)  y ⇐ M[F[((λx.e1) e2)]]  →CHF  y ⇐ M[F[e1[e2/x]]]

Evaluation contexts E; forcing contexts F; monadic contexts M.

SLIDE 62

CHF Non-Determinism

CHF-reduction is non-deterministic:

  x ⇐ putMVar a 0 | y ⇐ putMVar a 1 | a m −

  →CHF  x ⇐ return unit | y ⇐ putMVar a 1 | a m 0
  →CHF  x ⇐ putMVar a 0 | y ⇐ return unit | a m 1
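This non-determinism is directly observable in GHC's Concurrent Haskell. In my small sketch, two threads race to fill the same MVar; which value the main thread sees depends on the schedule, and only the losing put stays blocked:

```haskell
import Control.Concurrent (forkIO)
import Control.Concurrent.MVar (newEmptyMVar, putMVar, takeMVar)

main :: IO ()
main = do
  a <- newEmptyMVar
  _ <- forkIO (putMVar a (0 :: Int))  -- thread x
  _ <- forkIO (putMVar a 1)           -- thread y
  v <- takeMVar a                     -- blocks until one put wins the race
  print (v == 0 || v == 1)            -- True; which value is scheduler-dependent
```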

SLIDE 63

CHF: Successful Processes and Convergence

Success: a process x ⇐main return e | P is called successful.
Note: P may still be reducible.

May-convergence:    P↓may holds whenever P →CHF,* Psuccess.
Should-convergence: P↓should holds whenever P →CHF,* P′ implies P′↓may.

  P1 ≤CHF,may P2     iff for all process contexts D: D[P1]↓may ⟹ D[P2]↓may
  P1 ≤CHF,should P2  iff for all D: D[P1]↓should ⟹ D[P2]↓should
  P1 ∼ P2            iff P1 ∼CHF,may P2 and P1 ∼CHF,should P2

SLIDE 64

CHF: Results

Theorem: Every functional reduction is correct: P1 →CHF,functional P2 implies P1 ∼ P2.
  Examples: beta-reduction, case-reduction, seq-reduction.

Theorem: The monadic reductions (as standard reductions), with the exception of
putMVar and takeMVar, are correct.

Theorem: The monad laws for >>= are correct, provided (seq e1 e2) is only used
for forcing functional expressions e1.

Sabel, S.: A Contextual Semantics for CHF, PPDP 2011
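For reference, these are the three monad laws the last theorem is about, checked here pointwise in the Maybe monad (a stand-in of my choosing for the CHF monad; the sample functions are also mine):

```haskell
-- Left identity: return a >>= f  ==  f a
leftId :: Int -> Bool
leftId a = (return a >>= f) == f a
  where f x = Just (x + 1)

-- Right identity: m >>= return  ==  m
rightId :: Maybe Int -> Bool
rightId m = (m >>= return) == m

-- Associativity: (m >>= f) >>= g  ==  m >>= (\x -> f x >>= g)
assoc :: Maybe Int -> Bool
assoc m = ((m >>= f) >>= g) == (m >>= (\x -> f x >>= g))
  where
    f x = Just (x + 1)
    g x = if even x then Just (x * 2) else Nothing

main :: IO ()
main = print (all leftId [0 .. 5]
           && all rightId (Nothing : map Just [0 .. 5])
           && all assoc (Nothing : map Just [0 .. 5]))  -- True
```

In CHF the laws are contextual equivalences, not syntactic identities, which is why the seq side condition in the theorem is needed.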

SLIDE 65

CHF Conservativity

Theorem: Embedding the pure functional part of CHF into full CHF is conservative
(comparing ∼CHF and ∼pure).

Consequence of CHF conservativity: correct transformations (optimizations) in the
pure functional part remain correct in CHF.

Sabel, S.: Conservative Concurrency in Haskell. LICS 2012

SLIDE 66

Overview of the Conservativity Proof

[diagram: the proof relates finite syntax and infinite trees; the calculi CHF and
PF (the pure fragment) on the finite side, and CHFI, PFI, PFMI on the infinite-tree
side. Contextual equivalences ∼c and bisimilarities ∼b are connected along the
translation IT into infinite trees, e.g. e1 ∼c,CHF e2 is related to
IT(e1) ∼c,CHFI IT(e2), and IT(e1) ∼c,PFI IT(e2) to IT(e1) ∼b,PFI IT(e2)]

SLIDE 67

Non-Conservativity Results

Let CHFL = CHF + lazy futures
(a lazy future's concurrent computation starts only if its value is demanded).

CHFL is not a conservative extension of PF. Counterexample:
  seq e2 (seq e1 e2) ∼PF (seq e1 e2)
  but seq e2 (seq e1 e2) ≁CHFL (seq e1 e2)

Since lazy futures are encodable with unsafeInterleaveIO,
CHF + unsafeInterleaveIO is also not a conservative extension of PF.
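The encoding mentioned at the end can be sketched directly: `unsafeInterleaveIO` defers an IO computation until its result is demanded, which is exactly the lazy-future behavior (here run sequentially in the demanding thread; the `IORef` flag is my device for observing when the computation actually fires):

```haskell
import Control.Exception (evaluate)
import Data.IORef (newIORef, readIORef, writeIORef)
import System.IO.Unsafe (unsafeInterleaveIO)

-- A lazy future: the action is not run at creation time,
-- only when its value is demanded.
lazyFuture :: IO a -> IO a
lazyFuture = unsafeInterleaveIO

main :: IO ()
main = do
  ran <- newIORef False
  v <- lazyFuture (writeIORef ran True >> return (42 :: Int))
  before <- readIORef ran        -- False: nothing has been demanded yet
  _ <- evaluate v                -- demanding v triggers the deferred computation
  after <- readIORef ran         -- True
  print (before, after)          -- (False,True)
```

Because evaluation demand now has an observable side effect, adding an extra seq can change behavior, which is the intuition behind the counterexample above.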

SLIDE 68

Methods Used

  • Forking and commuting diagrams.
  • Context lemmas for may- and should-convergence.
  • Unfolding letrec into infinitary expressions, and contextual equivalence there.
  • Equivalence of call-by-need and call-by-name using infinitary expressions.
  • Proving properties of translations (adequacy) and embeddings (full abstraction)
    (S., Niehren, Schwinghammer, Sabel: IFIP TCS 2008).
  • Soundness and completeness of applicative similarity (for may-convergence)
    for infinitary pure expressions, using Howe's method.

SLIDE 69

Proving Correctness of a Concurrent Implementation of Software Transactional Memory

SLIDES 70–72

Two program calculi for STM Haskell (S., Sabel, ICFP 2013):

  SHF   Specification
  CSHF  Concurrent Implementation
  translation ψ (the implementation)

Definition of Correctness: the implementation fulfills the specification:
  P↓may ⟺ ψ(P)↓may  and  P↓should ⟺ ψ(P)↓should
meaning: CSHF is a correct evaluator of SHF.

More general and more abstract: ψ is semantics-reflecting:
  ψ(e1) ≤ ψ(e2) ⟹ e1 ≤ e2

slide-73
SLIDE 73

Transactional Memory

Software Transactional Memory (STM) treats shared memory operations as transactions provides lock-free and very convenient concurrent programming requires an implementation that correctly executes the transactions

40/52

SLIDE 74

STM Haskell

STM library for Haskell, introduced by Harris et al., PPoPP 2005.
Uses Haskell's strong type system to distinguish between
  • IO-computations (IO monad),
  • software transactions (STM monad), and
  • functional code.

SLIDE 75

The STM Haskell API

Transactional variables:  TVar a

Primitives to form STM-transactions (STM a):
  newTVar, readTVar, writeTVar, retry, orElse e e'

Executing an STM-transaction:  atomically e

Semantics: the transaction execution is
  atomic:   all or nothing, effects are indivisible, and
  isolated: concurrent evaluation is not observable.
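A small runnable use of this API, via GHC's stm package (`Control.Concurrent.STM`); the account-transfer scenario is my illustration, with `orElse` falling back when the first branch calls `retry`:

```haskell
import Control.Concurrent.STM

-- Withdraw n, blocking (retry) if the balance is insufficient.
withdraw :: TVar Int -> Int -> STM ()
withdraw acc n = do
  bal <- readTVar acc
  if bal < n then retry else writeTVar acc (bal - n)

main :: IO ()
main = do
  a <- newTVarIO (10 :: Int)
  b <- newTVarIO (0 :: Int)
  -- Try to move 20 from a to b; that branch retries (insufficient funds),
  -- so orElse atomically takes the 5-unit alternative instead.
  atomically ((withdraw a 20 >> writeTVar b 20)
              `orElse` (withdraw a 5 >> writeTVar b 5))
  (x, y) <- atomically ((,) <$> readTVar a <*> readTVar b)
  print (x, y)  -- (5,5)
```

Both branches of the `orElse` run inside one `atomically`, so other threads can never observe the abandoned 20-unit attempt: exactly the atomicity and isolation the slide states.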

SLIDE 76

Specification: Process Calculus SHF

SHF is a process calculus similar to CHF:
  top level:    processes, futures and the TVars
  second level: transactions (STM monad)
  third level:  functional evaluation, an extended lambda calculus with
                case, constructors, letrec

SLIDES 77–78

Specification: Process Calculus SHF

Operational semantics: call-by-need "small-step" reduction →SHF, ...
Big-step rule for transactional evaluation:

  D1[u ≀ y ⇐ M[atomically e]]  →SHFA,*  D1′[u ≀ y ⇐ M[atomically (returnSTM e′)]]
  ───────────────────────────────────────────────────────────────────────────────
  D[u ≀ y ⇐ M[atomically e]]   →SHF     D′[u ≀ y ⇐ M[returnIO e′]]

  where →SHFA are the small-step rules for transactional evaluation.

Informally: if the transaction can be completely executed while all other threads
are stopped, then execute it.

This enforces sequential and atomic evaluation of transactions in the specification
calculus SHF ⇒ atomicity and isolation obviously hold.

Whether a transaction can be executed is undecidable in the SHF operational
semantics.

SLIDE 79

Implementation: Calculus CSHF

Concurrent implementation; extensions w.r.t. SHF:
  • local copies of the used global TVars
  • bookkeeping per thread of read and written TVars;
    it is a stack, due to nested orElse-s
  • bookkeeping of potentially conflicting threads (at the TVars)

SLIDE 80

A CSHF example rule: the read operation (readg)

A read first looks into the local store. If no local TVar exists, then the global
value is copied into the local store, and the thread's own identifier is added to
the notify-list of the global TVar.

[rule (readg), schematically: a thread u ≀ y ⇐ M[readTVar x], where the global
TVar x holds e1, reduces by creating a fresh binding z = e1, returning z
(returnSTM z), recording x with its local copy z in u's local store and read-log
(La extended by x), and adding u to the notify-list g of the global TVar x]

SLIDE 81

Implementation: Calculus CSHF

Operational semantics: a true small-step reduction →CSHF:
  • concurrent evaluation of threads, and also concurrent evaluation of STM transactions
  • all rule applications are decidable

Transaction execution (informally):
  • all writes are performed on local TVars
  • the read and written TVars (only the names) are logged
  • bookkeeping of notify-lists of threads at the TVars

Commit phase:
  1. lock the (relevant) global TVars
  2. send a retry to all threads in the notify-lists of to-be-written TVars
     (= conflicting threads)
  3. write the content of the local TVars into the global TVars
  4. remove the locks

SLIDE 82

Correctness of the Implementation

  SHF (∼SHF)  --translation ψ-->  CSHF (∼CSHF)

Main Theorem
  Convergence equivalence: for any SHF-process P:
    P↓SHF ⟺ ψ(P)↓CSHF  and  P⇓SHF ⟺ ψ(P)⇓CSHF
  Adequacy: for all P1, P2 ∈ SHF:
    ψ(P1) ∼CSHF ψ(P2) ⟹ P1 ∼SHF P2

SLIDES 83–84

Proof Methods

Commutation / non-commutation of reduction steps in reduction sequences.
This requires four partial proofs for convergence equivalence:

  p↓ ⟹ ψ(p)↓        ψ(p)↓ ⟹ p↓
  p↑ ⟹ ψ(p)↑        ψ(p)↑ ⟹ p↑   (base case already covered)

Analyzing and exploiting properties of translations:
compositionality and convergence equivalence.

SLIDE 85

Consequences for the Implementation and Comments

Consequences of correctness:
  ⇒ CSHF is a correct evaluator for SHF
  ⇒ correct program transformations in CSHF are also correct for SHF

Consequences of correctness and of the proof:
  • Every (successful) reduction sequence in the specification SHF is also
    possible in the implementation CSHF.
  • Every (successful) reduction sequence in the implementation CSHF can be
    retranslated into a successful reduction sequence in the specification SHF.
  • A progress property of the implementation: at least one of several
    conflicting transactions succeeds.

SLIDE 86

Conclusion

  • Contextual semantics can be applied to deterministic as well as
    non-deterministic and concurrent programming languages. The only requirements:
    a syntax of expressions and contexts, an operational semantics, and a
    definition of values.
  • The (theoretical and practical) tools have grown in power: context lemmas,
    applicative (bi)simulations, diagrams, translations, combinations of may
    and should.
  • Rewriting techniques can be applied to small-step operational semantics.
  • Large examples are within reach of the methods of contextual semantics
    (cf. STM correctness).
  • A drawback: reasoning is tedious and often too syntactic.

SLIDE 87

Future Work

More work is required on:
  • polymorphic typing and contextual equivalence
  • more applicative (bi)simulations, also for concurrency
    (w.r.t. should-convergence)
  • complexity of reduction sequences
  • deeper analysis of translations
  • invariances of the contextual preorder and equivalences
  • automating the operational reasoning
  • ...