SLIDE 1

Sequential and Parallel Abstract Machines for Optimal Reduction

Marco Pedicini (Roma Tre University) in collaboration with Mario Piazza (Univ. of Chieti-Pescara)

Proofs and Types 25 years later

Università Roma Tre, March 26 – 28, 2015

SLIDE 7

Lambda Calculus

Alonzo Church introduced the lambda-calculus in the 1930s as a model of computation alternative to recursive functions.

  • Lambda Terms
  • Variables: x, y, … (a denumerably infinite set)
  • Application: if T and U are lambda-terms, then (T)U is a lambda-term
  • Abstraction: if x is a variable and U is a lambda-term, then λx.U is a lambda-term
  • Term reduction as computing device (beta-rule):

(λx.U)V →β U[V/x]

SLIDE 8

Turing Completeness

  • Lambda definability of recursive functions, by encoding integers as lambda-terms (Church numerals):

0 = λf.λx.x
1 = λf.λx.(f)x
2 = λf.λx.(f)(f)x
…
n = λf.λx.(f)ⁿx
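Since Python has first-class functions, the numerals above can be transcribed literally (the decoder `to_int` and the combinators `succ`/`add` are our own helpers for checking):

```python
# Church numerals: n is the iterator λf.λx.(f)^n x.
zero = lambda f: lambda x: x
succ = lambda n: lambda f: lambda x: f(n(f)(x))
add  = lambda m: lambda n: lambda f: lambda x: m(f)(n(f)(x))

def to_int(n):
    """Decode a Church numeral by iterating the machine-integer successor on 0."""
    return n(lambda k: k + 1)(0)

two  = succ(succ(zero))
four = add(two)(two)
print(to_int(four))   # 4
```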

SLIDE 11

History

  • At the beginning of digital computers, in the 1950s, one of the first languages was Lisp, by McCarthy (MIT).
  • Then, in the 1960s, functional programming languages exploiting formal proofs of correctness were studied: ML, Erlang, Scheme, Clean, Caml, …
  • Nowadays functional languages are enriched with many special constructs which imperative languages cannot support (e.g. Clojure, Scala, F#).

SLIDE 12

GOI and PELCR

  • Geometry of Interaction (GOI) is the basis of a family of semantics for programming languages (game semantics).
  • GOI is (a kind of) operational semantics.
  • GOI provides an algebraic theory for the sharing of sub-expressions, and permitted the development of optimal lambda-calculus reduction and of a parallel evaluation mechanism based on a local and asynchronous calculus.

[graph: a term graph with nodes ⊤, λ, @ and a sharing node X]

Optimal reduction was introduced in J.-J. Lévy's PhD thesis and defined (on sharing graphs) by J. Lamping in 1990.

SLIDE 13

TERMS as GRAPHS

We interpret a lambda-term M as its syntactic graph [M]:

[graph: [(λx.x)λx.x], with nodes ⊤, λ, @ linked by AX and CUT nodes]

SLIDE 14

Reduction Example

[graph: nodes ⊤, λ, @, λ]

Syntactic tree of (λx.x)λx.x (with binders).

SLIDE 15

Reduction Example

[graph: the oriented syntactic tree]

We orient edges according to the five types of nodes, and we introduce explicit nodes for variables. We also add sharing operators to manage duplications (unnecessary in this example, given the linearity of x in λx.x).

SLIDE 16

Reduction Example

[graph: the tree with AX and CUT nodes added]

We introduce axiom and cut nodes to reconcile edge orientations.
SLIDE 17

Reduction Example

[graph: the graph before the beta step, with the cut configuration highlighted]

We show one reduction step (the one corresponding to the beta-rule): the cut-node configuration must be removed and replaced by direct connections among the neighbouring nodes.

SLIDE 18

Reduction Example

[graph: the graph after the step, with new trivial cuts]

A reduction step may introduce new cuts (trivial ones in this case), but it consists essentially of the composition of paths in the graph.
SLIDE 19

Reduction Example

[graph sequence: the remaining cut eliminations, ending in the normal form with nodes ⊤, λ, AX]

SLIDE 30

LAMBDA STAR

The Girard dynamic algebra Λ∗, also known as the GOI monoid, is the free monoid with a morphism !(·), an involution (·)∗ and a zero, generated by the constants p, q and a family W = (wi)i of exponential generators, such that for any u ∈ Λ∗:

(annihilation) x∗y = δxy for x, y ∈ {p, q, wi}
(swapping) !(u)wi = wi !^ei(u)

where δxy is the Kronecker delta, and ei is an integer associated with wi, called the lift of wi; i is called the name of wi, and we often write wi,ei to make the lift of the generator explicit.

The iterated morphism ! represents the applicative depth of the target node. The lift of an exponential generator corresponds to the difference of applicative depths between the source and the target node.

SLIDE 31

STABLE FORMS and EXECUTION FORMULA

  • Orienting the annihilation and swapping equations from left to right, we get a rewriting system which is terminating and confluent.
  • The non-zero normal forms, known as stable forms, are the terms ab∗ where a and b are positive (i.e., written without ∗s).
  • The fact that every non-zero term is equal to such an ab∗ form is referred to as the "AB∗ property". From this, one easily gets that the word problem is decidable and that Λ∗ is an inverse monoid.

Definition (Execution Formula)

EX(RT) = Σ_{φij ∈ P(RT)} W(φij)

where φij is the formal sum of all possible paths from node i to node j.
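To make the rewriting behind the AB∗ property concrete, here is a toy normalizer (a sketch of our own: it handles only the multiplicative generators p, q, ignoring the exponential generators and their lifts, so only the annihilation rule fires):

```python
# Toy left-to-right rewriting to stable form in (a fragment of) the dynamic
# algebra. A word is a list of letters like "p" or "q*"; rewriting applies the
# annihilation rule x* y -> delta_xy, i.e. x* x -> 1 and x* y -> 0 for x != y.

ZERO = None  # the absorbing zero of the monoid

def normalize(word):
    """Rewrite until no starred/positive pair x* y remains; ZERO or a stable form."""
    word = list(word)
    i = 0
    while i + 1 < len(word):
        a, b = word[i], word[i + 1]
        if a.endswith("*") and not b.endswith("*"):
            if a[:-1] == b:          # x* x -> 1: erase the pair
                del word[i:i + 2]
                i = max(i - 1, 0)    # a new redex may appear one step earlier
            else:                    # x* y -> 0: the whole word collapses
                return ZERO
        else:
            i += 1
    return word                      # stable form a b*: positives, then starred

print(normalize(["p*", "p", "q"]))   # ['q']
print(normalize(["p*", "q"]))        # None (the word is zero)
print(normalize(["p", "q*"]))        # ['p', 'q*'] (already an a b* form)
```

Termination is visible here because each step either shortens the word or returns ZERO; confluence of the full system is the non-trivial part stated in the slide.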

SLIDE 32

PELCR EVALUATION

  • Evaluation as graph reduction: in the algebraic interpretation of the interaction rules, a lambda-term is interpreted as a weighted graph.

[graph: weighted term graph with nodes ⊤, λ, @, X and edge weights of the form !^d(1), !^d(q), !^d(p), !^d(wi,ei)]

  • Parallel evaluation: the graph has to be distributed, and we distribute its nodes (and edges); thus a lambda-term represents at once the program, the evaluation state and the network of communication channels. PELCR stands for Parallel Environment for optimal Lambda-Calculus Reduction, introduced in [PediciniQuaglia2007].

SLIDE 33

PELCR SPEEDUP (DD4 run time)

DD4 is the computation of the (shared) normal form of (δ)(δ)4, where δ := λx.(x)x and 4 := λf.λx.(f)⁴x.

SLIDE 34

DD4 SPEEDUP (speed vs number of PEs)

SLIDE 35

but... on this job (EXP3)

SLIDE 36

EXP3 - single CPU workload

SLIDE 37

EXP3 - run-time vs number of processors

SLIDE 38

EXP3 - workload on 4 CPUs

SLIDE 39

Super-linear speedup

SLIDE 42

Pluggers

Results in parallel execution were obtained by using a directed version of GOI; we briefly describe this version with the help of the algebra of unification. Computation is performed by connecting pluggers. There are two kinds of pluggers, corresponding to the polarities + and −.

SLIDE 43

Double Pluggers

Pluggers are present on both sides. There is a direction for composition.

SLIDE 44

Combinations of Double Pluggers

Pluggers on the two sides can be of any polarity: therefore we can have the following four types.

SLIDE 46

Plugging instructions

Pluggers can be connected by following instructions, here represented by terms. For instance, t = p(x) and u = p(p(x)).

SLIDE 48

Instruction Compliance in Connections

Unification of terms is the criterion for performing a plugging. If t = p(x1) and u = p(p(x2)), then θ′ = x2 → x3 and θ = x1 → p(x3).
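The unification step can be sketched as follows (a standard first-order unification routine, not PELCR's actual code; terms are nested tuples, variables are strings, and the occurs check is omitted for brevity):

```python
# First-order unification sketch. A compound term p(t) is the tuple ("p", t);
# a variable is a plain string such as "x1".

def unify(t, u, subst=None):
    """Return a substitution (dict) making t and u equal, or None on failure."""
    subst = dict(subst or {})

    def walk(t):                    # resolve a variable through the substitution
        while isinstance(t, str) and t in subst:
            t = subst[t]
        return t

    t, u = walk(t), walk(u)
    if t == u:
        return subst
    if isinstance(t, str):          # t is a variable: bind it (no occurs check)
        subst[t] = u
        return subst
    if isinstance(u, str):
        subst[u] = t
        return subst
    if t[0] == u[0] and len(t) == len(u):   # same constructor: unify arguments
        for a, b in zip(t[1:], u[1:]):
            subst = unify(a, b, subst)
            if subst is None:
                return None
        return subst
    return None

# The slide's example: t = p(x1) and u = p(p(x2)) unify by sending x1 to p(x2).
print(unify(("p", "x1"), ("p", ("p", "x2"))))   # {'x1': ('p', 'x2')}
```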

SLIDE 51

Execution Task

The minimal task during execution consists in connecting a plugger u against all compatible pluggers (same color = same node) by following instructions t1, t2, . . . tn. Pluggers whose connection works become new connection tasks (pairs of new double pluggers, to be connected) decorated with the corresponding residual instructions.

SLIDE 52

DVR - Directed Virtual Reduction

The way of performing GOI that we just sketched is the so-called half-combustion strategy, derived from DVR, introduced by Danos et al. in 1997. Half-combustion was implemented in PELCR before the PPDP paper (PediciniQuaglia2000) and then presented in PediciniQuaglia2007.

SLIDE 55

A bridging model

We introduce a formal description of multicore "functional" computation, as a step towards a quantitative study of the behaviour of the PELCR implementation.

We already know that PELCR is sound as a "parallel" operational semantics; this means that we do not care about the reordering of actions, since the computation of the normal form by Geometry of Interaction rules (shared optimal reduction) is local and asynchronous.

Definition (PELCR Actions)

Given a dynamic graph G, i.e., a graph G = (V, E ⊆ V × V) with edges labelled over the Girard dynamic algebra Λ∗, we define an action α on G as a triple ⟨ε, e, w⟩ where ε ∈ {+, −}, e = (vt, vs) is a pair of nodes of G, and w ∈ Λ∗.

SLIDE 56

PELCR-VM

We describe the PELCR virtual machine (PVM) as an abstract machine working on its state (C, D).

  • C contains the computational task: a stream (FIFO) of closures.
  • A closure is a signed edge: given an edge α = (s, t, w), a signed edge αε is an edge with a polarity ε ∈ {+, −}; s and t are memory addresses, and w is a weight in the dynamic algebra.
  • D represents the current memory, and contains environment elements.
  • Every environment element has a memory address ei and is called a node; the memory cell ei contains signed edges αi^εi.

[figure: the current graph/state of the machine, i.e. a target node vt with neighbours v1, …, vm reached through weights w1, …, wm, together with a pending action ⟨vt, vs, w⟩]

SLIDE 59

PELCR in SECD style

0 reading from the input interface:

(0, NULL, nil, ∅) → (0, NULL, read(), ∅)

1 extraction of an action α from the stream C:

(0, NULL, α :: C′, D) → (α, NULL, C′, D) if α ≠ 0
(0, NULL, α :: C′, D) → (0, NULL, C′, D) if α = 0

2 environment access for the action α:

(α, NULL, C, D) → (α, vt, C, D′)

where α = ⟨ε, e, w⟩, the edge is e = (vt, vs), and

D′ = D if vt is already a node of D
D′ = D ∪ {vt} if vt is a new node to be added to D

SLIDE 60

4 action execution:

(α, vt, C, D) → (0, NULL, C, D′) if X is empty
(α, vt, C, D) → (0, NULL, C ⊗ X, D′) if X ≠ ∅

where X = execute(α) is the set of residuals of the action α on its context vt^(−ε), and D′ = D ∪ {((vt, vs)^ε, w)}.

[figure: before and after the execution step, the action ⟨vt, vs, w⟩ meets the edges towards v1, …, vm (with weights w1, …, wm) stored at vt, producing new nodes v′1, …, v′m with weights w′i composed from w and the wi]

Note that the v′i are new nodes introduced by the execution step, which can be freely allocated on any of the processing elements.
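The transitions 0-4 above can be condensed into a toy Python loop (a sketch of our own: the state is the (S, E, C, D) tuple, and the residual rule inside `step`, which connects the two source nodes and concatenates weights, is a deliberate simplification of execute(α), not the real dynamic-algebra product):

```python
# Toy sequential PVM: actions are triples (eps, (v_t, v_s), w).
from collections import deque

def compose(u, w):
    """Placeholder for the product in the dynamic algebra (here: concatenation)."""
    return u + w

def step(state):
    """One transition on a state (S, E, C, D); returns None when the machine stops."""
    S, E, C, D = state
    if S is None and C:                      # 1. extract an action from the stream C
        return (C.popleft(), None, C, D)
    if S is not None and E is None:          # 2. environment access: locate (or
        eps, (vt, vs), w = S                 #    create) the target node of the action
        D.setdefault(vt, [])
        return (S, vt, C, D)
    if S is not None and E is not None:      # 4. execution: interact with the
        eps, (vt, vs), w = S                 #    opposite-polarity edges stored at vt
        for other_eps, (ut, us), u_w in D[vt]:
            if other_eps != eps:             # hypothetical residual rule: connect
                C.append((eps, (us, vs), compose(u_w, w)))   # the two source nodes
        D[vt].append(S)                      # store the action at its target node
        return (None, None, C, D)
    return None

def run(actions):
    state = (None, None, deque(actions), {})
    while (nxt := step(state)) is not None:
        state = nxt
    return state[3]                          # the final environment D

D = run([("+", ("a", "b"), "p"), ("-", ("a", "c"), "q")])
print(D["b"])   # [('-', ('b', 'c'), 'pq')]
```

Two actions meeting at node "a" with opposite polarities produce one residual action, which is fed back into the stream C, exactly the feedback structure that the parallel machines below distribute over several units.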

SLIDE 61

Parallel Abstract Machines

We show a parallel machine with two computing units, whose state is therefore represented by (S, E, C, D) = (S1 ⊗ S2, E1 ⊗ E2, C1 ⊗ C2, D1 ⊗ D2).

[figure: two PVMs with memories D1 and D2, connected to the read()/write() interfaces and exchanging streams through ZIP merge nodes]

SLIDE 62

Synchronous Machine

0 read from the input stream:

(0 ⊗ 0, NULL ⊗ NULL, nil ⊗ nil, ∅ ⊗ ∅) → (0 ⊗ 0, NULL ⊗ NULL, read() ⊗ nil, ∅ ⊗ ∅)

1 the actions α1 and α2 are synchronously extracted from the streams C1 and C2:

(0 ⊗ 0, NULL ⊗ NULL, α1 :: C′1 ⊗ α2 :: C′2, D1 ⊗ D2) → (α1 ⊗ α2, NULL ⊗ NULL, C′1 ⊗ C′2, D1 ⊗ D2)

SLIDE 63

Synchronous Machine (cont.)

3 simultaneous environment access for both actions:

(α1 ⊗ α2, NULL ⊗ NULL, C1 ⊗ C2, D1 ⊗ D2) → (α1 ⊗ α2, vt^1 ⊗ vt^2, C1 ⊗ C2, D′1 ⊗ D′2)

when αi = ⟨εi, ei, wi⟩ and either ei = (vt^i, vs^i), or vt^i is undefined if αi = 0; then

D′i = Di if vt^i is already a node of Di
D′i = Di ∪ {vt^i} if vt^i is a new node to be added to Di

4 actions execution:

(α1 ⊗ α2, vt^1 ⊗ vt^2, C1 ⊗ C2, D1 ⊗ D2) → (0 ⊗ 0, NULL ⊗ NULL, ((C1 ⊗ execute1(α1)) ⊗ execute1(α2)) ⊗ ((C2 ⊗ execute2(α1)) ⊗ execute2(α2)), D′1 ⊗ D′2)

where D′i = Di ∪ {((vt^i, vs^i)^εi, wi)}.

SLIDE 64

Asynchronous Machine

The state of the asynchronous machine is annotated with the scheduled processing unit: (p, S, E, C, D) = (p, S1 ⊗ S2, E1 ⊗ E2, C1 ⊗ C2, D1 ⊗ D2), where p ∈ {1, 2} is the index of the scheduled processor. The sequence of controls p is itself a stream (of integers in {1, 2}): we may either choose a random sequence, or force a particular scheduling by giving it explicitly.

SLIDE 65

Asynchronous parallel SECD

0 reading from the input interface:

(1, 0 ⊗ 0, NULL ⊗ NULL, nil ⊗ nil, ∅ ⊗ ∅) → (1, 0 ⊗ 0, NULL ⊗ NULL, read() ⊗ nil, ∅ ⊗ ∅)

1 extraction of the action αp from the stream Cp:

(p, S1 ⊗ S2, E1 ⊗ E2, C1 ⊗ C2, D1 ⊗ D2) → (p′, S′1 ⊗ S′2, E′1 ⊗ E′2, C′1 ⊗ C′2, D′1 ⊗ D′2)

if Sp = 0, Ep = NULL and Cp = αp :: C′p, then

S′i = Si if i ≠ p, and S′p = αp;
E′i = Ei, C′i = Ci for i ≠ p, and D′i = Di;

finally, p′ is chosen according to the scheduling function.

SLIDE 66

Asynchronous parallel SECD (cont.)

2 environment access for the action αp:

(p, S1 ⊗ S2, E1 ⊗ E2, C1 ⊗ C2, D1 ⊗ D2) → (p′, S′1 ⊗ S′2, E′1 ⊗ E′2, C′1 ⊗ C′2, D′1 ⊗ D′2)

when Sp = αp = ⟨εp, ep, wp⟩, where

E′i = Ei if i ≠ p, and E′p = vt^p;
S′i = Si and C′i = Ci;
D′i = Di if i ≠ p, or if i = p and vt^p ∈ Dp;
D′p = Dp ∪ {vt^p} if vt^p ∉ Dp.

SLIDE 67

Asynchronous parallel SECD (cont.)

3 action execution:

(p, S1 ⊗ S2, E1 ⊗ E2, C1 ⊗ C2, D1 ⊗ D2) → (p′, S′1 ⊗ S′2, E′1 ⊗ E′2, C′1 ⊗ C′2, D′1 ⊗ D′2)

when Sp = αp = ⟨εp, ep, wp⟩ and Ep = vt^p; then

S′i = Si if i ≠ p, and S′p = 0;
E′i = Ei if i ≠ p, and E′p = NULL;
C′i = Ci ⊗ executei(αp);

and D′i = Di for all i ≠ p, while D′p is obtained from Dp by adding the edge ((vt^p, vs^p)^εp, wp).

SLIDE 68

Stream equivalence

Definition (node-view, or view of base v, of a stream of actions S)

Given a stream of actions S and a node v, we define the stream Sv by selecting the actions whose target node is v. More formally:

Sv = 0 if S = 0
Sv = S(0) :: shift(S)v if S(0) = ⟨ε, (v, vs), w⟩
Sv = shift(S)v otherwise (i.e., if S(0) = ⟨ε, (vt, vs), w⟩ with vt ≠ v, or S(0) = 0)

The polarised view of base v^ε is obtained by further selecting the actions with the opposite polarity with respect to that of the base. Namely:

Sv^ε = 0 if S = 0
Sv^ε = S(0) :: shift(S)v^ε if S(0) = ⟨−ε, (v, vs), w⟩
Sv^ε = shift(S)v^ε otherwise
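Both views can be read as stream filters; a direct Python transcription (our naming, with finite streams represented as lists or generators):

```python
# Views of a stream of actions (eps, (v_t, v_s), w).

def node_view(stream, v):
    """View of base v: the sub-stream of actions whose target node is v."""
    for action in stream:
        eps, (vt, vs), w = action
        if vt == v:
            yield action

def polarised_view(stream, v, base_eps):
    """Polarised view of base v^eps: actions at v with the opposite polarity."""
    for action in stream:
        eps, (vt, vs), w = action
        if vt == v and eps != base_eps:
            yield action

S = [("+", ("a", "b"), "p"), ("-", ("a", "c"), "q"), ("+", ("b", "a"), "q")]
print(list(node_view(S, "a")))            # the two actions targeting "a"
print(list(polarised_view(S, "a", "+")))  # [('-', ('a', 'c'), 'q')]
```

These are exactly the projections compared node-by-node in the execution-equivalence definition of the next slide.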
SLIDE 69

Execution equivalence

Definition

The states (S1, E1, C1, D1) and (S2, E2, C2, D2) of two machines M1 and M2 are ordered if

1 there is a graph isomorphism φ between D1 and a sub-graph of D2 such that weights and polarities are preserved, and

2 for any node v ∈ φ(D1), the views on the controls (the two streams of actions) taken at v and at its corresponding node φ(v) are equivalent: (C1)v ≈ (C2)φ(v).

Theorem

Given a (sequential) machine M1 and a (parallel) machine M2 such that M1 ≃σ M2 via the isomorphism φ, we have v.M1 ≃σ φ(v).M2.

SLIDE 70

LOAD BALANCING and AGGREGATION

Distribution of the evaluation is obtained by:

  • Processing Elements (PEs) with separate running PVMs;
  • a Global Memory Address Space for the environments;
  • a Message Communication Layer for streaming among PEs.

Issues we have considered:

  • Granularity: fine-grained vs. coarse-grained;
  • Load balancing: liveness, avoiding deadlocks.
SLIDE 71

ARCHITECTURE

  • Multicore: the type of parallelism we considered is MIMD, and it behaves very well on modern multicore machines (super-linear speedup!);
  • Vectorial: there is room for further improving the evaluation strategy to cope with vectorial parallelism, as in:
  • Cell: an evolution of the PowerPC architecture developed by IBM-Sony-Toshiba (and used in BlueGene and the PS3);
  • FPGA: arrays of programmable logic gates;
  • GPU: graphics cards offering many computational cores.

SLIDE 72

Thanks!

SLIDE 73

References

Beniamino Accattoli, Pablo Barenbaum, and Damiano Mazza. Distilling abstract machines. In Proceedings of the 19th ACM SIGPLAN International Conference on Functional Programming, 2014.

Andrea Asperti and Juliusz Chroboczek. Safe operators: brackets closed forever, optimizing optimal lambda-calculus implementations. Appl. Algebra Eng. Commun. Comput., 8(6):437–468, 1997.

Andrea Asperti, Cecilia Giovanetti, and Andrea Naletto. The Bologna Optimal Higher-order Machine. Journal of Functional Programming, 6(6):763–810, 1996.

V. Danos and L. Regnier. Proof-nets and the Hilbert space. In Advances in Linear Logic, pages 307–328. Cambridge University Press, 1995.
SLIDE 74

Jean-Yves Girard. Geometry of interaction I. Interpretation of system F. In Logic Colloquium '88 (Padova, 1988), volume 127 of Stud. Logic Found. Math., pages 221–260. North-Holland, Amsterdam, 1989.

Jean-Yves Girard. Geometry of interaction II. Deadlock-free algorithms. In COLOG-88 (Tallinn, 1988), volume 417 of Lecture Notes in Comput. Sci., pages 76–93. Springer, Berlin, 1990.

Georges Gonthier, Martín Abadi, and Jean-Jacques Lévy. The geometry of optimal lambda reduction. In Conference Record of the Nineteenth Annual ACM SIGPLAN-SIGACT Symposium on Principles of Programming Languages, pages 15–26, Albuquerque, New Mexico, 1992.

J. Roger Hindley and Jonathan P. Seldin. Introduction to Combinators and λ-Calculus, volume 1 of London Mathematical Society Student Texts. Cambridge University Press, Cambridge, 1986.

SLIDE 75

Peter J. Landin. The mechanical evaluation of expressions. Computer Journal, 6(4):308–320, January 1964.

Ian Mackie. The geometry of interaction machine. In Proceedings of the 22nd ACM SIGPLAN-SIGACT Symposium on Principles of Programming Languages, pages 198–208. ACM, 1995.

Marco Pedicini and Francesco Quaglia. PELCR: parallel environment for optimal lambda-calculus reduction. ACM Trans. Comput. Log., 8(3):Art. 14, 36 pp., 2007.

Laurent Regnier. Lambda-calcul et réseaux. PhD thesis, Université Paris 7, 1992.

SLIDE 76

J.J.M.M. Rutten. A tutorial on coinductive stream calculus and signal flow graphs. Theoretical Computer Science, 343(3):443–481, 2005.

Leslie G. Valiant. A bridging model for multi-core computing. J. Comput. System Sci., 77(1):154–166, 2011.