Logic and Computation
Lecture 4
Zena M. Ariola
University of Oregon
24th Estonian Winter School in Computer Science, EWSCS ’19
Lessons learned
- Construction-destruction in programming
- How to implement strong reduction efficiently
- How to formulate join points in an intermediate language
- Recursion and co-recursion
- A unifying theme for existing programming idioms: abstraction, session types, Church encodings, lazy evaluation and object-oriented programming
- Which connectives do you include in an intermediate language?
Data
Defined by rules of creation (constructors), like algebraic data types in ML and Haskell:

  data Either a b where
    Left  :: a ⊢ Either a b
    Right :: b ⊢ Either a b

Producer: fixed shapes given by the constructors
  Left(v1)    Right(v2)

Consumer: case analysis on the constructions
  μ̃[Left(x).c1 | Right(y).c2]    case v of { Left(x).v1 | Right(y).v2 }
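This discipline is exactly that of ordinary algebraic data types; a minimal sketch in plain Haskell (with hypothetical names, to avoid clashing with the Prelude's Either):

```haskell
-- Data is introduced by constructors (the producers), mirroring the
-- slide's Left(v1) / Right(v2) shapes.
data MyEither a b = MyLeft a | MyRight b

-- Consumer: case analysis on the constructions
describe :: MyEither Int String -> String
describe e = case e of
  MyLeft n  -> "left: " ++ show n
  MyRight s -> "right: " ++ s
```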
Codata
Defined by rules of destruction (messages):

  codata a & b where
    π1 : a & b ⊢ a
    π2 : a & b ⊢ b

Consumer: fixed shapes given by the destructors
  π1[e1]  (v.π1)    π2[e2]  (v.π2)

Producer: case analysis on the destructors.
  "If I'm asked for the first, do this"
  "If I'm asked for the second, do that"
  µ(π1[α].c1 | π2[β].c2)    {π1 → v1, π2 → v2}
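The negative pair a & b can be sketched in Haskell as a record of its two observations; the producer answers each projection in the spirit of copattern matching (the names And, pi1, pi2, pair are hypothetical):

```haskell
-- Codata sketch: a & b is defined by what you can observe of it.
data And a b = And { pi1 :: a, pi2 :: b }

-- Producer: "if asked for the first, return x; if asked for the second, return y"
pair :: a -> b -> And a b
pair x y = And { pi1 = x, pi2 = y }
```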
What do you want to do with a boolean?

  if b then x else y

  codata Bool where
    If : Bool, a, a ⊢ a

The producer will then pattern match on the request:
  µ(If(x, y).v)    or, using an OO syntax, {If(x, y) → v}

True and false then become:
  {If(x, y) → x}    {If(x, y) → y}

With copatterns:
  true  . If x y = x
  false . If x y = y

We arrive at the familiar encodings in the polymorphic λ-calculus:
  Bool  = ∀a. a → a → a
  true  = Λa. λx:a. λy:a. x
  false = Λa. λx:a. λy:a. y

The same applies to other data types (the Visitor Pattern).
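The polymorphic-λ-calculus encoding on the slide can be written directly in Haskell with RankNTypes (a sketch; the names CBool, ctrue, cfalse, cif are hypothetical):

```haskell
{-# LANGUAGE RankNTypes #-}

-- Bool = ∀a. a → a → a: a boolean is whatever answers an If request
type CBool = forall a. a -> a -> a

ctrue, cfalse :: CBool
ctrue  x _ = x   -- {If(x, y) → x}
cfalse _ y = y   -- {If(x, y) → y}

-- "if b then t else e" is just application of the encoded boolean
cif :: CBool -> a -> a -> a
cif b t e = b t e
```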
  codata a → b where
    call : a → b, a ⊢ b

Consumer: one destructor (the function call)
  v · e    (the argument v, and what to do with the result e)

Producer: consider the shape of the observation
  µ(x · α).c
  λx.v = µ(x · α).⟨v ∥ α⟩
Extensionality is essential for observational properties about programs:

  M : A → B  implies  M = λx.M x

Evaluation to weak-head normal form (i.e., to a lambda) avoids renaming.

Extensionality and weak reduction are inconsistent in call-by-name:
  λx.Ω x = Ω
but Ω loops forever, while λx.Ω x is done.

Plotkin's call-by-name continuation-passing style (CPS) transformation does not validate the η axiom:
  [[λx.M x]] = λk.k(λx.λq.[[M]](λv.v x q))    which is not equal to [[M]]
Thielecke proves that η follows from parametricity.
Hofmann and Streicher have proposed an alternative CPS based on pairs:

  [[λx.M x]] = λ(x, k).[[M x]] k
             = λ(x, k).[[M]] (x, k)
             = [[M]]
A function is not constructed; it is a destructor of the calling context:
  λx.M = µ(x · α).⟨M ∥ α⟩

Recall:
  let (x, y) = v in v′ = let z = v in v′[π1(z)/x, π2(z)/y]

So:
  ⟨µ(x · α).c ∥ E⟩ = c[car E/x, cdr E/α]

We then have:
  λx.Ω x = µ(x · α).⟨Ω ∥ x · α⟩
         = µβ.⟨Ω ∥ car β · cdr β⟩    (pattern matching as projections)
         = µβ.⟨Ω ∥ β⟩                (the call stack is surjective)
         = Ω
The continuation is a data structure, not a function.

Plotkin's CPS:
  [[M N]]  = λk.[[M]] (λf. f [[N]] k)
  [[λx.M]] = λk.k (λx.[[M]])

With stack-based continuations:
  [[M N]]  = λk.[[M]] ([[N]], k)
  [[λx.M]] = λ(x, k).[[M]] k

The same solution restores confluence to λ-calculi with control:
  λx.(Abort 0) x reduces both to Abort 0 and to λx.Abort 0
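The slogan "the continuation is a data structure" can be illustrated by a tiny defunctionalized evaluator for addition expressions (a hypothetical example, not the slide's CPS translation itself): the continuation is a first-order stack of frames.

```haskell
data Expr = Lit Int | Add Expr Expr

-- The continuation is data: a stack of evaluation frames, not a function.
data Kont
  = Done              -- nothing left to do
  | AddR Expr Kont    -- evaluate the right operand next
  | AddL Int Kont     -- left operand already evaluated; add it

eval :: Expr -> Kont -> Int
eval (Lit n)   k = apply k n
eval (Add a b) k = eval a (AddR b k)

apply :: Kont -> Int -> Int
apply Done       n = n
apply (AddR b k) n = eval b (AddL n k)
apply (AddL m k) n = apply k (m + n)
```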
  if x > 100:
      print "x is large"
  else:
      print "x is small"
  print "goodbye"

[Flowchart: test x > 100; the "yes" and "no" branches print their message and join at print "goodbye".]

Join Point: a common point where several branches of control flow join together (a φ node in SSA).

Disadvantage of continuation-passing style: it bakes in the evaluation strategy.

Join points are very special functions:
a) Always tail-called; they don't return
b) Never escape their scope

Different operational reading: just a jump to a labeled block of code.
Join points are more efficient to implement, less costly than a full closure.
In Core:

  let j x y = big
  in not (case z of
            A x y → j x y
            B     → False)
    ⇓
  let j x y = big
  in case z of
       A x y → not (j x y)
       B     → not False

This is bad! The join point is ruined (j is no longer tail-called).
In Core with join points:

  let j x y = big
  in not (case z of
            A x y → j x y
            B     → False)
    ⇓
  let j x y = not big
  in case z of
       A x y → j x y
       B     → not False

This is much better! The join point is preserved!
The current intermediate language of Haskell has join points (FJ).
FJ is still in correspondence with minimal logic; it doesn't have control.
Join points are preserved through optimizations.
We replace Flanagan et al.'s motto "Think in CPS, work in direct style" with "Think in sequent calculus, work in direct style."
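At the source level the discipline can only be suggested: a join point is a local function that every branch tail-calls and that never escapes, so a compiler may lower it to a labeled block and a jump rather than a closure. A hypothetical sketch, echoing the flowchart example:

```haskell
-- j is a join point: only ever tail-called, never escapes its scope,
-- so it can compile to a jump to a labeled block of code.
classify :: Int -> String
classify x =
  let j msg = msg ++ ", goodbye"   -- the code where the branches join
  in if x > 100
       then j "x is large"
       else j "x is small"
```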
In recursion, data is constructed and the consumer is a process that uses the data:
- Data is finite and its use is (potentially) infinite
- Recursion uses the data, rather than producing it
- Recursion starts big, and reduces down to a base case

In corecursion, the use of data is constructed and the producer is a process that generates the data:
- The use of data is finite and its creation is (potentially) infinite
- Corecursion produces the data, rather than using it
- Corecursion starts from a seed and produces bigger and bigger internal state
  data List = Nil | Cons Nat List

Data is finite, but we need recursion in the consumer (as in Gödel's System T):

  E ::= α | μ̃x.c | v · E | rec(Nil ⇒ vb, Cons(x, xs) ⇒ y.v)[E]

  ⟨Nil ∥ rec(Nil ⇒ vb, Cons(x, xs) ⇒ y.v)[E]⟩
    → ⟨vb ∥ E⟩
  ⟨Cons(x, xs) ∥ rec(Nil ⇒ vb, Cons(x, xs) ⇒ y.v)[E]⟩
    → ⟨xs ∥ rec(Nil ⇒ vb, Cons(x, xs) ⇒ y.v)[μ̃y.⟨v ∥ E⟩]⟩

  length = λ(x, α).⟨x ∥ rec(Nil ⇒ 0, Cons(_, xs) ⇒ y.y + 1)[α]⟩,
which we abbreviate as λ(x, α).⟨x ∥ rec(0, _.y.y + 1)[α]⟩

Example
  ⟨Cons(1, Cons(2, Cons(3, Nil))) ∥ rec(0, _.y.y + 1)[tp]⟩
  → ⟨Cons(2, Cons(3, Nil)) ∥ rec(0, _.y.y + 1)[μ̃y.⟨y + 1 ∥ tp⟩]⟩
  → ⟨Cons(3, Nil) ∥ rec(0, _.y.y + 1)[μ̃y.⟨y + 1 ∥ μ̃y.⟨y + 1 ∥ tp⟩⟩]⟩
  → ⟨Nil ∥ rec(0, _.y.y + 1)[μ̃y.⟨y + 1 ∥ μ̃y.⟨y + 1 ∥ μ̃y.⟨y + 1 ∥ tp⟩⟩⟩]⟩
  → ⟨0 ∥ μ̃y.⟨y + 1 ∥ μ̃y.⟨y + 1 ∥ μ̃y.⟨y + 1 ∥ tp⟩⟩⟩⟩
  → ⟨1 ∥ μ̃y.⟨y + 1 ∥ μ̃y.⟨y + 1 ∥ tp⟩⟩⟩
  → ⟨2 ∥ μ̃y.⟨y + 1 ∥ tp⟩⟩
  → ⟨3 ∥ tp⟩
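The list recursor corresponds to Haskell's foldr; a sketch (the names recList and len are hypothetical):

```haskell
-- rec(Nil ⇒ vb, Cons(x, xs) ⇒ y.v)[E] as a Haskell function: the Nil
-- branch is the base value vb; the Cons branch sees the head x and the
-- result y of recursing on the tail.
recList :: b -> (a -> b -> b) -> [a] -> b
recList vb _ []       = vb
recList vb f (x : xs) = f x (recList vb f xs)

-- length as on the slide: base 0, step ignoring the head
len :: [a] -> Int
len = recList 0 (\_ y -> y + 1)
```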
  codata Stream where
    Head : Nat
    Tail : Stream

The use of codata is finite, but we need co-recursion in the producer:

  corec(Head[α] ⇒ eb, Tail[α] ⇒ β.e)[v]
  ⟨corec(Head[α] ⇒ eb, Tail[α] ⇒ β.e)[V] ∥ Head[E]⟩
    → ⟨V ∥ eb[E/α]⟩
  ⟨corec(Head[α] ⇒ eb, Tail[α] ⇒ β.e)[V] ∥ Tail[E]⟩
    → ⟨corec(Head[α] ⇒ eb, Tail[α] ⇒ β.e)[µβ.⟨V ∥ e[E/α]⟩] ∥ E⟩
Example
The stream of 0's is represented as
  corec(Head[α] ⇒ α, Tail[α] ⇒ β.β)[0]
The set conat of the natural numbers plus ∞ is represented as
  conat = corec(Head[α] ⇒ α, Tail[α] ⇒ β.μ̃x.⟨x + 1 ∥ β⟩)[0]
Let's take the third element of conat:

  ⟨corec(Head[α] ⇒ α, Tail[α] ⇒ β.μ̃x.⟨x + 1 ∥ β⟩)[0] ∥ Tail.Tail.Head.tp⟩
  → ⟨corec(Head[α] ⇒ α, Tail[α] ⇒ β.μ̃x.⟨x + 1 ∥ β⟩)[µβ.⟨0 ∥ μ̃x.⟨x + 1 ∥ β⟩⟩] ∥ Tail.Head.tp⟩
  → ⟨corec(Head[α] ⇒ α, Tail[α] ⇒ β.μ̃x.⟨x + 1 ∥ β⟩)[µβ.⟨1 ∥ β⟩] ∥ Tail.Head.tp⟩
  → ⟨corec(Head[α] ⇒ α, Tail[α] ⇒ β.μ̃x.⟨x + 1 ∥ β⟩)[1] ∥ Tail.Head.tp⟩
  → ⟨corec(Head[α] ⇒ α, Tail[α] ⇒ β.μ̃x.⟨x + 1 ∥ β⟩)[2] ∥ Head.tp⟩
  → ⟨2 ∥ tp⟩
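The corecursive producer can be sketched in Haskell, where laziness lets us model the codata Stream directly (the names Stream, unfold, nats, third are hypothetical):

```haskell
-- Stream as codata: defined by its Head and Tail observations.
data Stream a = Stream { hd :: a, tl :: Stream a }

-- corec: generate a stream from a seed s, with a head observation h
-- and a seed transformer t (as conat is built from seed 0 and x + 1).
unfold :: (s -> a) -> (s -> s) -> s -> Stream a
unfold h t s = Stream (h s) (unfold h t (t s))

-- conat-style stream 0, 1, 2, ...
nats :: Stream Integer
nats = unfold id (+ 1) 0

-- the third element: Tail, Tail, then Head
third :: Stream a -> a
third = hd . tl . tl
```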
Hughes, in "Why Functional Programming Matters", motivates the utility of practical functional programming through its excellence in compositionality.
Compositionality is reached through the technique of demand-driven programming.
Haskell uses lazy evaluation to implement demand-driven programming. Disadvantages?
A language should directly support the capability of yielding control to the consumer, independently of the language being strict or lazy.
Reynolds identified two different mechanisms to achieve abstraction: abstract data types and procedural abstraction.
Abstract data types are crisply expressed by existential types.
What is the essence of procedural abstraction? CODATA!

Example
  codata Set where
    IsEmpty  : Set -> Bool
    Contains : Set -> Int -> Bool
    Insert   : Set -> Int -> Set
    Union    : Set -> Set -> Set

  finiteSet : List Int -> Set
  (finiteSet xs).IsEmpty    = xs == []
  (finiteSet xs).Contains y = elemOf xs y
  (finiteSet xs).Insert y   = finiteSet (y:xs)
  (finiteSet xs).Union s    = fold (\x t -> t.Insert x) s xs

  emptySet = finiteSet []
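In Haskell, the codata Set becomes a record of observations, and the copattern clauses of finiteSet become record fields (a sketch: Union is omitted for brevity, and the slide's elemOf becomes the Prelude's elem; field names are hypothetical):

```haskell
-- Procedural abstraction: a Set is known only by the questions it answers.
data Set = Set
  { isEmpty  :: Bool
  , contains :: Int -> Bool
  , insert   :: Int -> Set
  }

finiteSet :: [Int] -> Set
finiteSet xs = Set
  { isEmpty  = null xs
  , contains = \y -> y `elem` xs
  , insert   = \y -> finiteSet (y : xs)
  }

emptySet :: Set
emptySet = finiteSet []
```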
The extension of data types with indices has proven useful to statically verify a data structure's invariants, as for red-black trees.
With indexed data types, a programmer can constrain the way an object is constructed.
With indexed codata types, a programmer can constrain the way an object is going to be used.
In a language with type indices, codata enables a programmer to express more information in the interface of an abstraction.

Example
  index Raw, Bound, Live

  codata Socket i where
    Bind    : Socket Raw -> String -> Socket Bound
    Connect : Socket Bound -> Socket Live
    Send    : Socket Live -> String -> ()
    Receive : Socket Live -> String
    Close   : Socket Live -> ()
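An approximation in Haskell uses a phantom type index, so that out-of-order protocol steps are type errors (a hypothetical sketch with no real networking; the String payload stands in for the socket's internal state):

```haskell
{-# LANGUAGE DataKinds, KindSignatures #-}

data State = Raw | Bound | Live

-- the index constrains how the socket may be used next
newtype Socket (s :: State) = Socket String

bind :: Socket 'Raw -> String -> Socket 'Bound
bind _ addr = Socket addr

connect :: Socket 'Bound -> Socket 'Live
connect (Socket a) = Socket a

send :: Socket 'Live -> String -> ()
send _ _ = ()

-- for testing only: observe the stored address
addrOf :: Socket s -> String
addrOf (Socket a) = a
```

Trying to send on a raw socket, e.g. send (Socket "" :: Socket 'Raw) "hi", is rejected by the type checker, which is exactly the extra interface information the slide describes.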
Internal and external choice correspond to data and codata.
Choosing between internal and external choice corresponds to the expression problem.
The same call-and-return dialogue between a client and a server occurs in both data and codata.

Example
  queue = ?{ enq : ?string.queue,
             deq : !{ none : unit,
                      some : !string.queue } }
can be translated as:
  codata Queue where
    Enq : Queue -> (String -> Queue)
    Deq : Queue -> Answer

  data Answer where
    None : () -> Answer
    Some : (String, Queue) -> Answer
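The translated interface can be sketched in Haskell, with Queue as a record of observations (codata) and Answer as ordinary data; fifo is a hypothetical FIFO producer:

```haskell
data Answer = None | Some String Queue

-- Queue as codata: Enq and Deq are its observations.
data Queue = Queue
  { enq :: String -> Queue
  , deq :: Answer
  }

-- a simple producer: answers every request from its internal list
fifo :: [String] -> Queue
fifo xs = Queue
  { enq = \x -> fifo (xs ++ [x])
  , deq = case xs of
      []       -> None
      (y : ys) -> Some y (fifo ys)
  }
```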
Which connectives should we have in an intermediate language to support any data and codata type the user may define in a classical call-by-name or call-by-value language?

Positive: ⊕, 0, ⊗, 1

0 has zero constructors, and its eliminator is the empty case analysis μ̃[] (in natural deduction form: case M of {}):

  0L:  Γ | μ̃[] : 0 ⊢ ∆        (there is no right rule for 0)
  1R:  Γ ⊢ () : 1 | ∆
  1L:  from c : (Γ ⊢ ∆), derive Γ | μ̃().c : 1 ⊢ ∆

Negative: &, ⊤, ⅋, ⊥

⊤ is the dual of 0:

  ⊤R:  Γ ⊢ µ() : ⊤ | ∆        (there is no left rule for ⊤)
  ⊥L:  Γ | tp : ⊥ ⊢ ∆
  ⊥R:  from c : (Γ ⊢ ∆), derive Γ ⊢ µ(tp).c : ⊥ | ∆

...and shifts.