
Dialectical refinement: Rescuing programming from the logicians

Richard Bornat, Professor of Computer Programming, School of Engineering and Information Sciences, Middlesex University. 5th November 2008.


There are two programming problems

◮ The novice programming problem
◮ The expert programming problem

Both are interesting. This talk is about the second problem. Most programming ‘languages’, especially including Java, address the expert programming problem.


Programming as a creative act

I take ‘programming’ to be the whole business: conception, design, construction, verification, testing . . . I’m very interested in ‘correct’ programs: ones with specification and verification. And I’m interested in inventing correct programs. (I claim that) invention always involves trial and error. Can trial and error be ‘logical’? Or is it illogical guesswork?


Two ways to program

I can invent a program, in a blinding flash of creativity; and then verify it (of course I wouldn’t test it . . . ). Logicians call this ad-hoc programming. Before the verification I have to think up a logical specification. I don’t need the specification to start programming. Or do I? . . .

Logicians would like us to start with the specification; write a magical (unimplementable) program which satisfies the specification; then refine the magical program step-by-step towards something that can be used; using refinement steps which preserve the property that the program satisfies the specification. The logicians’ way of programming produces a result that is correct by construction.

Ad-hoc programs are rarely specified, even more rarely verified. Can we defend program invention? Can we give it a nicer name?


Dialectical programming

Lakatos, in “Proofs and Refutations”, shows the evolution over time of Euler’s conjecture about polyhedral solids:

    Vertices − Edges + Faces = 2    (e.g. for a cube V = 8, E = 12, F = 6).

The conjecture leads to a proof, which is challenged by a counter-example (e.g. a polyhedron with a tunnel, a polyhedron with a central cavity), which leads to a refinement of the conjecture, a new proof, a deeper understanding. This is intentionally reminiscent of Hegel’s dialectic: “Thesis plus antithesis yields synthesis”. (But we are simple people; we won’t bother with Hegel.) Can the dialectic reach places that logical refinement cannot? I think so.
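The arithmetic is easy to check. A minimal sketch: the cube’s counts are from the slide; the 16/32/16 counts for a “picture-frame” polyhedron with a tunnel are a standard toroidal example, not from the talk.

```python
# Euler's conjecture: V - E + F = 2 for polyhedral solids.
def euler(v, e, f):
    return v - e + f

print(euler(8, 12, 6))    # 2: the cube satisfies the conjecture
print(euler(16, 32, 16))  # 0: a "picture frame" with a tunnel refutes it
```

The counter-example is exactly what drives the dialectic: the proof must be refined to exclude (or account for) solids with tunnels.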


The single-place buffer

In an aeroplane you have sensors – e.g. temperature, pressure, airflow – and you have monitors – e.g. black-box recorders, autopilots, display screens. We want a simple interface between a monitor and a sensor. The simplest is a single-place buffer: the sensor writes to it, the monitor reads. The sensor may write faster than the monitor reads (so there may be missed values). The monitor may read faster than the sensor writes (so there may be repeated values). But the monitor must always get complete values, not half-written values, not half of one value and half of the next.


How it can go wrong

The sensor is a clock, displaying minutes m and seconds s in two shared integer variables. Each second it does

    if s = 59 then s := 0; m := m + 1 else s := s + 1 fi

Between s := 0 and m := m + 1 the clock is slow (by 60 seconds). If the monitor reads at that instant it sees a half-written value. If the sensor instead does m := m + 1; s := 0, the clock can be seen as fast (by 59 seconds).
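The torn read can be replayed deterministically. A sketch in Python (names like tick_steps and read_clock are illustrative, not from the talk): the “sensor” updates s and m in two separate steps, and the “monitor” reads in between.

```python
def tick_steps(m, s):
    """Yield the shared (m, s) state after each individual assignment of one tick."""
    if s == 59:
        s = 0
        yield m, s          # state after s := 0, before m := m + 1
        m = m + 1
        yield m, s
    else:
        s = s + 1
        yield m, s

def read_clock(m, s):
    return m * 60 + s       # the monitor's view, in total seconds

# Start at 0:59; the next tick should take us to 1:00 (60 seconds).
states = list(tick_steps(0, 59))
half_written = read_clock(*states[0])   # monitor reads between the two assignments
final = read_clock(*states[1])
print(half_written, final)              # 0 60: the half-written value is 60 s slow
```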


Dijkstra’s solution: atomic accesses

    local b = null, ws = ⟨null⟩, rs = ⟨⟩ in
        . . .
        write(w) ≙ ⟨b := w; ws := ws.w⟩
        . . .
        read( ) ≙ local y in ⟨y := b; rs := rs.y⟩; return y ni
    ni

Atomicity means indivisibility. Interleaving will do. Atomicity can be done with ‘semaphores’ as on the railways (block signalling). For specification purposes I add a couple of auxiliary variables ws and rs. The specification is that the read sequence destuttered is a subsequence of ws, and the last element of ws is always in b: ⌊rs⌋ ⊑ ws ∧ wsΩ = b. But atomicity means waiting, and waiting isn’t simple or even certain.
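A sketch of the atomic buffer in Python, with a Lock standing in for the atomicity brackets and the auxiliary ws/rs sequences used to check the destuttered-subsequence specification. All names here are illustrative, not from the talk.

```python
import threading

class AtomicBuffer:
    def __init__(self):
        self._lock = threading.Lock()
        self.b = None
        self.ws = [None]     # auxiliary: every value ever written (b starts as null)
        self.rs = []         # auxiliary: every value ever read

    def write(self, w):
        with self._lock:     # atomic: b and ws change together
            self.b = w
            self.ws.append(w)

    def read(self):
        with self._lock:     # atomic: read b and record it in rs
            y = self.b
            self.rs.append(y)
        return y

def destutter(xs):
    """Collapse adjacent repeats: [1,1,2,2,1] -> [1,2,1]."""
    out = []
    for x in xs:
        if not out or out[-1] != x:
            out.append(x)
    return out

def is_subsequence(xs, ys):
    it = iter(ys)
    return all(any(x == y for y in it) for x in xs)

buf = AtomicBuffer()
buf.write(1); r1 = buf.read(); r2 = buf.read()    # reader faster: repeated value
buf.write(2); buf.write(3); r3 = buf.read()       # writer faster: missed value
print(r1, r2, r3)                                 # 1 1 3
print(is_subsequence(destutter(buf.rs), buf.ws))  # True: the spec holds
```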


Cut down waiting with a two-slot buffer

    local b = null, ws = ⟨null⟩, rs = ⟨⟩ in
        . . .
        write(w) ≙ ⟨b := w; ws := ws.w⟩
        . . .
        read( ) ≙ local y in ⟨y := b; rs := rs.y⟩; return y ni
    ni

    local c[2] = (null, null), l = 0, ws = ⟨null⟩, rs = ⟨⟩ in
        . . .
        write(w) ≙ ⟨c[!l] := w; l := !l; ws := ws.w⟩
        . . .
        read( ) ≙ local y in ⟨y := c[l]; rs := rs.y⟩; return y ni
    ni

Data refinement! Now ⌊rs⌋ ⊑ ws ∧ wsΩ = c[l].
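The data refinement can be sketched in Python, with !l rendered as 1 − l (class and method names are illustrative): the writer fills the slot the reader is not looking at, then publishes it by flipping l.

```python
class TwoSlotBuffer:
    def __init__(self):
        self.c = [None, None]
        self.l = 0

    def write(self, w):
        self.c[1 - self.l] = w   # c[!l] := w -- write into the unused slot...
        self.l = 1 - self.l      # l := !l   -- ...then publish it by flipping l

    def read(self):
        return self.c[self.l]    # read the slot l points at

buf = TwoSlotBuffer()
buf.write("A")
print(buf.read())   # A
buf.write("B")      # goes into the OTHER slot: "A" is never overwritten mid-read
print(buf.read())   # B
```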


Simplify atomicity in the writer

    local c[2] = (null, null), l = 0, ws = ⟨null⟩, rs = ⟨⟩ in
        . . .
        write(w) ≙ ⟨c[!l] := w; l := !l; ws := ws.w⟩
        . . .
        read( ) ≙ local y in ⟨y := c[l]; rs := rs.y⟩; return y ni
    ni

    local c[2] = (null, null), l = 0, ws = ⟨null⟩, rs = ⟨⟩ in
        . . .
        write(w) ≙ local wt in ⟨wt := !l⟩; ⟨c[wt] := w⟩; ⟨l := wt; ws := ws.w⟩ ni
        . . .
        read( ) ≙ local y in ⟨y := c[l]; rs := rs.y⟩; return y ni
    ni

Stores naturally serialise small reads and writes. Still ⌊rs⌋ ⊑ ws ∧ wsΩ = c[l].


(Can’t) simplify atomicity in the reader

    local c[2] = (null, null), l = 0, ws = ⟨null⟩, rs = ⟨⟩ in
        . . .
        write(w) ≙ local wt in ⟨wt := !l⟩; ⟨c[wt] := w⟩; ⟨l := wt; ws := ws.w⟩ ni
        . . .
        read( ) ≙ local y in ⟨y := c[l]; rs := rs.y⟩; return y ni
    ni

    local c[2] = (null, null), l = 0, ws = ⟨null⟩, rs = ⟨⟩ in
        . . .
        write(w) ≙ local wt in ⟨wt := !l⟩; ⟨c[wt] := w⟩; ⟨l := wt; ws := ws.w⟩ ni
        . . .
        read( ) ≙ local y, rt in ⟨rt := l⟩; ⟨y := c[rt]; rs := rs.y⟩; return y ni
    ni

Looks plausible, but it’s broken. Still wsΩ = c[l], but no longer ⌊rs⌋ ⊑ ws.


What goes wrong?

    local c[2] = (null, null), l = 0, ws = ⟨null⟩, rs = ⟨⟩ in
        . . .
        write(w) ≙ local wt in
            ⟨wt := !l⟩;             (w1)
            ⟨c[wt] := w⟩;           (w2)
            ⟨l := wt; ws := ws.w⟩   (w3)
        ni
        . . .
        read( ) ≙ local y, rt in
            ⟨rt := l⟩;                   (r1)
            ⟨y := c[rt]; rs := rs.y⟩;    (r2)
            return y
        ni
    ni

From the point of view of the reader, after ⟨rt := l⟩, the writer behaves like this finite-state machine:

[Figure: four boxes. Boxes 1 and 2 have l = rt; boxes 3 and 4 have l = !rt. From box 1, the w2 action c[!rt] := w leads to box 2; the w3 action l := !rt leads to box 3; the w2 action c[rt] := w leads to box 4; and the w3 action l := rt leads back to box 1.]

If the reader comes in at box 1 or 2 and reads at box 4, it will see the second value written; if it then comes back quickly, it can see the first thing written!! Also note that the w2 action c[rt] := w can collide with the reader’s y := c[rt].
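The bad schedule can be replayed deterministically. A Python sketch that drives the writer’s steps by hand (names illustrative): the reader observes the second value written and then, coming back quickly, the first.

```python
c = [None, None]
l = 0

def write_steps(w):
    """The writer's steps, one at a time: wt := !l; c[wt] := w; l := wt."""
    global l
    wt = 1 - l
    yield
    c[wt] = w
    yield
    l = wt
    yield

rt = l                           # reader: rt := l  (l = rt, "box 1")

for _ in write_steps("first"):   # a complete write: c[1] := "first"; l := 1
    pass

w2 = write_steps("second")
next(w2); next(w2)               # wt := 0; c[0] := "second"  (now at "box 4")

read1 = c[rt]                    # reader reads: c[0] = "second"
rt = l                           # reader comes back quickly: rt := l  (= 1)
read2 = c[rt]                    # and reads c[1] = "first"!

print(read1, read2)              # second first -- values observed out of order
```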


Can we repair it (1)?

All our problems (ordering, collisions) are caused by the third action of the finite-state machine, c[rt] := w.

Can we detect when that action happens?

    local c[2] = (null, null), l = 0, ws = ⟨null⟩, rs = ⟨⟩ in
        . . .
        write(w) ≙ local wt in ⟨wt := !l⟩; ⟨c[wt] := w⟩; ⟨l := wt; ws := ws.w⟩ ni
        . . .
        read( ) ≙ local y, rt in ⟨rt := l⟩; ⟨y := c[rt]; rs := rs.y⟩; return y ni
    ni

No, because the writer can’t tell the difference between the first and the third actions.


Can we repair it (2)?

All our problems (ordering, collisions) are caused by the third action of the finite-state machine, c[rt] := w.

Can we detect when it might happen?

    local c[2] = (null, null), l = 0, ws = ⟨null⟩, rs = ⟨⟩ in
        . . .
        write(w) ≙ local wt in ⟨wt := !l⟩; ⟨c[wt] := w⟩; ⟨l := wt; ws := ws.w⟩ ni
        . . .
        read( ) ≙ local y, rt in ⟨rt := l⟩; ⟨y := c[rt]; rs := rs.y⟩; return y ni
    ni

Yes: it becomes possible after the second action, l := !rt.


A repair (back to correctness)

The writer signals when disaster becomes possible; the reader incorporates the signal in its answer.

  local c[2] = (null, null), l = 0, ok, ws = ⟨null⟩, rs = ⟨⟩ in
    . . .
    write(w) ≙ local wt in
      ⟨wt := !l⟩; c[wt] := w;
      ⟨l := wt; ws := ws.w⟩;
      ⟨ok := false⟩
    ni
    . . .
    read() ≙ local y, rt in
      ⟨ok := true⟩;
      ⟨rt := l⟩; y := c[rt];
      ⟨rt := ok; rs := rs.(rt, y)⟩;
      return (rt, y)
    ni

– and then we notice that we don't need atomic buffer accesses any more. If rs′ is rs with the (false, ) results taken out and the true labels discarded, then we have ⌊rs′⌋ ⊑ ws ∧ wsΩ = c[l].

15
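The repaired algorithm's claim can be checked exhaustively on a small model. The sketch below (all names are mine, in Python rather than the talk's notation) splits each buffer-slot access into two halves so that a colliding write can "tear" the value a reader sees, interleaves two writes against one read in every possible order, and asserts the slide's claim: no result flagged true is ever torn.

```python
# Exhaustive interleaving check for the ok-flag repair (a sketch; the step
# decomposition and two-half tearing model are my own modelling choices).
from itertools import combinations

def make_state():
    st = {'c': [[0, 0], [0, 0]], 'l': 0, 'ok': False,
          'wt': 0, 'rt': 0, 'y0': 0, 'y1': 0, 'flag': False}

    def write_steps(w):
        return [
            lambda: st.update(wt=1 - st['l']),            # <wt := !l>
            lambda: st['c'][st['wt']].__setitem__(0, w),  # c[wt] := w, half 1
            lambda: st['c'][st['wt']].__setitem__(1, w),  # c[wt] := w, half 2
            lambda: st.update(l=st['wt']),                # <l := wt>
            lambda: st.update(ok=False),                  # <ok := false>
        ]

    read_steps = [
        lambda: st.update(ok=True),                  # <ok := true>
        lambda: st.update(rt=st['l']),               # <rt := l>
        lambda: st.update(y0=st['c'][st['rt']][0]),  # y := c[rt], half 1
        lambda: st.update(y1=st['c'][st['rt']][1]),  # y := c[rt], half 2
        lambda: st.update(flag=st['ok']),            # <rt := ok>
    ]
    return st, write_steps(1) + write_steps(2), read_steps

outcomes = set()
N_W, N_R = 10, 5                      # two 5-step writes vs one 5-step read
for rpos in combinations(range(N_W + N_R), N_R):
    st, wsteps, rsteps = make_state()
    wq, rq = iter(wsteps), iter(rsteps)
    for i in range(N_W + N_R):
        (next(rq) if i in rpos else next(wq))()
    torn = st['y0'] != st['y1']
    assert not (st['flag'] and torn), "a (true, y) result was torn"
    outcomes.add((st['flag'], torn))

# Torn reads do occur, but only with flag false; flagged-true reads are clean.
assert (False, True) in outcomes and (True, False) in outcomes
```

The check runs all 3003 interleavings; torn reads happen, but every one of them arrives labelled false, which is exactly what the repair promises.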


slide-95
SLIDE 95

Summary so far, and a criticism

We have gone from an atomic single-slot buffer (not wait-free) to an atomic double-slot buffer (ditto) to a faulty, not-so-completely-atomic double-slot buffer (still not wait-free) to a working wait-free non-atomic double-slot buffer that tells us when it's succeeded. We certainly haven't proceeded by “a sequence of true understatements” (Lakatos); we have made at least one “false overstatement”; perhaps we have made a step of “exception barring”.

I see the step that includes the ‘ok’ variable as an example of thesis (program) plus antithesis (counter-example) yielding synthesis (repaired program). But what use is the pair (false, something)? What can a user do but ignore something and try to read again?

16

slide-99
SLIDE 99

A more honest repair

  local c[2] = (null, null), l = 0, ok, ws = ⟨null⟩, rs = ⟨⟩ in
    . . .
    write(w) ≙ local wt in
      ⟨wt := !l⟩; c[wt] := w;
      ⟨l := wt; ws := ws.w⟩;
      ⟨ok := false⟩
    ni
    . . .
    read() ≙ local y, rt in
      do
        ⟨ok := true⟩;
        ⟨rt := l⟩; y := c[rt];
        ⟨rt := ok⟩
      until rt;
      ⟨rs := rs.y⟩;
      return y
    ni

Obviously not wait-free. But otherwise repaired. The finite-state machine now carries ok in its states.

[State diagram: states ok ∧ l = L, ok ∧ l = !L and their ¬ok counterparts; transitions c[!L] := w, l := !L, c[L] := w and ok := false.]

17
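The retry loop can be exercised deterministically by turning writer and reader into generators that yield between atomic actions, then driving them with an adversarial schedule. A sketch (the names, two-half tearing model and schedule are mine, not the talk's):

```python
# Generator rendition of the "more honest" retry repair: the <...> atomic
# actions become yield points, so a driver can force any interleaving.
from itertools import chain

c = [[0, 0], [0, 0]]            # two slots, two halves, so a read can tear
state = {'l': 0, 'ok': False}

def write(w):
    wt = 1 - state['l']; yield          # <wt := !l>
    c[wt][0] = w; yield                 # c[wt] := w, first half
    c[wt][1] = w; yield                 # c[wt] := w, second half
    state['l'] = wt; yield              # <l := wt>
    state['ok'] = False; yield          # <ok := false>

def read():
    while True:                         # do ... until rt
        state['ok'] = True; yield       # <ok := true>
        rt = state['l']; yield          # <rt := l>
        y = (c[rt][0], c[rt][1]); yield # y := c[rt], two halves
        good = state['ok']; yield       # <rt := ok>
        if good:
            return y

def run(schedule, procs):
    result = {}
    for name in schedule:
        try:
            next(procs[name])
        except StopIteration as fin:
            result[name] = fin.value
    return result

# Adversarial schedule: the read's first attempt straddles write(2)'s
# collision with its slot, sees the torn pair (2, 0), finds ok cleared,
# discards the value and loops; the second attempt succeeds.
procs = {'w': chain(write(1), write(2)), 'r': read()}
sched = 'r r w w w w w w w r r w w w w r r r r r'.split()
out = run(sched, procs)
print(out['r'])          # the retry returns the untorn (2, 2)
```

The torn value is observed but never returned, which is the honesty the slide asks for; the price is the loop, hence no wait-freedom.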

slide-101
SLIDE 101

Try three slots

  local c[2] = (null, null), d = null, l = 0, ok, ws = ⟨null⟩, rs = ⟨⟩ in
    . . .
    write(w) ≙ local wt in
      ⟨wt := !l⟩; c[wt] := w;
      ⟨l := wt; ws := ws.w⟩;
      d := w;
      ⟨ok := false⟩
    ni
    . . .
    read() ≙ local y, rt in
      ⟨ok := true⟩;
      ⟨rt := l⟩; y := c[rt];
      ⟨rt := ok⟩;
      if ¬rt then y := d else skip fi;
      ⟨rs := rs.y⟩;
      return y
    ni

The writer can first write in a side-channel, then signal that mayhem approaches. If it gets the signal, the reader uses the side-channel.

Perhaps this will work ...

18

slide-104
SLIDE 104

Try three slots

  local c[2] = (null, null), d = null, l = 0, ok, ws = ⟨null⟩, rs = ⟨⟩ in
    . . .
    write(w) ≙ local wt in
      ⟨wt := !l⟩; c[wt] := w;
      ⟨l := wt; ws := ws.w⟩;
      ⟨wt := ok⟩;
      if wt then d := w; ⟨ok := false⟩ else skip fi
    ni
    . . .
    read() ≙ local y, rt in
      ⟨ok := true⟩;
      ⟨rt := l⟩; y := c[rt];
      ⟨rt := ok⟩;
      if ¬rt then y := d else skip fi;
      ⟨rs := rs.y⟩;
      return y
    ni

The writer can first write in a side-channel, then signal that mayhem approaches. If it gets the signal, the reader uses the side-channel.

Perhaps this will work ... but it's more likely to work if the writer only writes when the reader is asking for it. And then I notice that they alternate, and I can use non-atomic read and write. This is Harris's algorithm, rationally developed.

18
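The side-channel version can be driven through the same collision schedule; now a single read attempt suffices. A sketch in the same generator style (names, schedule and the modelling of d as a single atomic cell are mine):

```python
# Generator rendition of the three-slot, side-channel repair: the reader that
# sees the ok signal cleared takes its answer from d instead of retrying.
from itertools import chain

c = [[0, 0], [0, 0]]                     # main slots, two halves each
state = {'l': 0, 'ok': False, 'd': 0}    # d is the side-channel slot

def write(w):
    wt = 1 - state['l']; yield          # <wt := !l>
    c[wt][0] = w; yield                 # c[wt] := w, first half
    c[wt][1] = w; yield                 # c[wt] := w, second half
    state['l'] = wt; yield              # <l := wt>
    asked = state['ok']; yield          # <wt := ok>
    if asked:                           # only write d when the reader asks
        state['d'] = w; yield           # d := w
        state['ok'] = False; yield      # <ok := false>

def read():
    state['ok'] = True; yield           # <ok := true>
    rt = state['l']; yield              # <rt := l>
    y = (c[rt][0], c[rt][1]); yield     # y := c[rt], two halves
    good = state['ok']; yield           # <rt := ok>
    if not good:
        y = (state['d'], state['d'])    # if ¬rt then y := d
    return y

def run(schedule, procs):
    result = {}
    for name in schedule:
        try:
            next(procs[name])
        except StopIteration as fin:
            result[name] = fin.value
    return result

# The read collides with write(2) in its slot and sees the torn (2, 0); but
# since the signal was raised, write(1) stashed 1 in d, and the read returns
# that instead -- one attempt, no loop, wait-free.
procs = {'w': chain(write(1), write(2)), 'r': read()}
sched = 'r r w w w w w w w w w r r r'.split()
out = run(sched, procs)
print(out['r'])
```

The returned value 1 is a value that was current during the read, so the history invariant survives even though the main slots were torn.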

slide-105
SLIDE 105

Recapitulation

  local b = null in
    . . .
    write(w) ≙ b := w
    . . .
    read() ≙ local y in y := b; return y ni

– Dijkstra’s single-place buffer.

19

slide-106
SLIDE 106

Recapitulation

  local b = null, ws = ⟨null⟩, rs = ⟨⟩ in
    . . .
    write(w) ≙ b := w; ws := ws.w
    . . .
    read() ≙ local y in y := b; rs := rs.y; return y ni

– with auxiliary ws and rs to show ⌊rs⌋ ⊑ ws ∧ wsΩ = b.

19
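The specification invariant can be checked mechanically on any sequential run. A Python sketch (helper names mine): ⌊·⌋ removes adjacent repeats, since a reader may legitimately re-read the same value, and ⊑ is "is a subsequence of".

```python
# Sequential model of the specification slide: the buffer b plus the
# auxiliary histories ws (writes, starting at <null>) and rs (reads),
# and a check of  ⌊rs⌋ ⊑ ws  ∧  wsΩ = b.
b = None
ws, rs = [None], []

def write(w):
    global b
    b = w; ws.append(w)          # b := w; ws := ws.w

def read():
    y = b; rs.append(y)          # y := b; rs := rs.y
    return y

def destutter(xs):               # ⌊xs⌋: collapse adjacent repeats
    out = []
    for x in xs:
        if not out or out[-1] != x:
            out.append(x)
    return out

def is_subseq(xs, ys):           # xs ⊑ ys
    it = iter(ys)
    return all(x in it for x in xs)

# An arbitrary run: reads may repeat values, writes may go unread.
write(1); read(); read(); write(2); write(3); read()
assert is_subseq(destutter(rs), ws) and ws[-1] == b
print(destutter(rs), ws)
```

Here the destuttered read history [1, 3] skips the unread write of 2 but still embeds in the write history, and the last write is what the buffer holds.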

slide-107
SLIDE 107

Recapitulation

  local c[2] = (null, null), l = 0, ws = ⟨null⟩, rs = ⟨⟩ in
    . . .
    write(w) ≙ c[!l] := w; l := !l; ws := ws.w
    . . .
    read() ≙ local y in y := c[l]; rs := rs.y; return y ni

– data refinement to two slots and ⌊rs⌋ ⊑ ws ∧ wsΩ = c[l].

19

slide-108
SLIDE 108

Recapitulation

  local c[2] = (null, null), l = 0, ws = ⟨null⟩, rs = ⟨⟩ in
    . . .
    write(w) ≙ local wt in
      ⟨wt := !l⟩; c[wt] := w;
      ⟨l := wt; ws := ws.w⟩
    ni
    . . .
    read() ≙ local y in y := c[l]; rs := rs.y; return y ni

– atomicity refinement in the writer; still ⌊rs⌋ ⊑ ws ∧ wsΩ = c[l].

19

slide-109
SLIDE 109

Recapitulation

  local c[2] = (null, null), l = 0, ws = ⟨null⟩, rs = ⟨⟩ in
    . . .
    write(w) ≙ local wt in
      ⟨wt := !l⟩; c[wt] := w;
      ⟨l := wt; ws := ws.w⟩
    ni
    . . .
    read() ≙ local y, rt in
      ⟨rt := l⟩; y := c[rt]; rs := rs.y; return y
    ni

– atomicity refinement in the reader; now ⌊rs⌋ ⊑ ws ∧ wsΩ = c[l] no longer holds.

19

slide-110
SLIDE 110

Recapitulation

  local c[2] = (null, null), l = 0, ok, ws = ⟨null⟩, rs = ⟨⟩ in
    . . .
    write(w) ≙ local wt in
      ⟨wt := !l⟩; c[wt] := w;
      ⟨l := wt; ws := ws.w⟩;
      ⟨ok := false⟩
    ni
    . . .
    read() ≙ local y, rt in
      ⟨ok := true⟩;
      ⟨rt := l⟩; y := c[rt];
      ⟨rt := ok; rs := rs.(rt, y)⟩;
      return (rt, y)
    ni

– exception barring; now ⌊rs′⌋ ⊑ ws ∧ wsΩ = c[l], where rs′ is rs with the (false, ) results taken out and the true labels discarded.

19

slide-111
SLIDE 111

Recapitulation

  local c[2] = (null, null), l = 0, ok, ws = ⟨null⟩, rs = ⟨⟩ in
    . . .
    write(w) ≙ local wt in
      ⟨wt := !l⟩; c[wt] := w;
      ⟨l := wt; ws := ws.w⟩;
      ⟨ok := false⟩
    ni
    . . .
    read() ≙ local y, rt in
      do
        ⟨ok := true⟩;
        ⟨rt := l⟩; y := c[rt];
        ⟨rt := ok⟩
      until rt;
      ⟨rs := rs.y⟩;
      return y
    ni

– more honest exception barring; once again ⌊rs⌋ ⊑ ws ∧ wsΩ = c[l], but no longer wait-free.

19

slide-113
SLIDE 113

Recapitulation

  local c[2] = (null, null), d = null, l = 0, ok, ws = ⟨null⟩, rs = ⟨⟩ in
    . . .
    write(w) ≙ local wt in
      ⟨wt := !l⟩; c[wt] := w;
      ⟨l := wt; ws := ws.w⟩;
      ⟨wt := ok⟩;
      if wt then d := w; ⟨ok := false⟩ else skip fi
    ni
    . . .
    read() ≙ local y, rt in
      ⟨ok := true⟩;
      ⟨rt := l⟩; y := c[rt];
      ⟨rt := ok⟩;
      if ¬rt then y := d else skip fi;
      ⟨rs := rs.y⟩;
      return y
    ni

– three slots; ⌊rs⌋ ⊑ ws ∧ wsΩ = c[l]; wait-free. Proof available on application.

19

slide-117
SLIDE 117

A programmer’s instinct demands . . .

  local c[2] = (null, null), d = null, l = 0, ok, ws = ⟨null⟩, rs = ⟨⟩ in
    . . .
    write(w) ≙ local wt in
      ⟨wt := !l⟩; c[wt] := w;
      ⟨l := wt; ws := ws.w⟩;
      ⟨wt := ok⟩;
      if wt then d := w; ⟨ok := false⟩ else skip fi
    ni
    . . .
    read() ≙ local y, rt in
      ⟨ok := true⟩;
      ⟨rt := l⟩; y := c[rt];
      ⟨rt := ok⟩;
      if ¬rt then y := d else ⟨ok := false⟩ fi;
      ⟨rs := rs.y⟩;
      return y
    ni

– the reader tells the writer when there's no need to use the side channel. It's no longer true that the writer only writes when ok. But from the point of view of the reader, nothing has changed! In fact the writer writes when ok or the reader is asleep. Easy to fix with another auxiliary variable, proof available on request.

20

slide-126
SLIDE 126

Pipelines, caches, weak memory models

In my argument I relied on sequencing of actions and on serialisation of (small) memory accesses.

In modern machines these assumptions don't hold. Pipelines reorder instructions if there isn't an obvious dependency – e.g. in c[wt] := w; l := wt or in d := w; ok := false. Caches delay interaction with the store. Viewed from another machine, store actions are reordered. So-called “weak memory models” are difficult to understand. That is, to understand well enough to reason about programs running under them. Maybe an EPSRC project . . .

21

slide-131
SLIDE 131

And finally

Programming is creative, experimental. That means we will go wrong. When we go wrong we can fix our program, refine our conjecture, or both. The mechanisms of refinement can guide us, but we may (I would say must) sometimes make “false overstatements”. Lakatos’s dialectic lives!

22