equational programming 2020 11 02 lecture 3


SLIDE 1

equational programming 2020 11 02 lecture 3

SLIDE 2

lambda-calculus in a nutshell

λ-terms:
  • variable x
  • constant c
  • abstraction (λx. M)
  • application (F M)
we consider λ-terms modulo α-conversion
β-reduction rule: (λx. M) N →β M[x := N]
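The constructors and the β-rule above can be sketched in Haskell (the language these slides later compare against). This is only a sketch: the datatype and the names `Term`, `subst`, `betaRoot` are ours, and the substitution is deliberately naive — it assumes no variable capture, i.e. that bound and free names are kept distinct.

```haskell
-- Hypothetical minimal encoding of lambda-terms (constants are left out;
-- a free Var plays that role here).
data Term = Var String | Lam String Term | App Term Term
  deriving (Eq, Show)

-- Naive substitution M[x := N]; assumes bound variables of M do not
-- capture free variables of N, so no renaming is performed.
subst :: String -> Term -> Term -> Term
subst x n (Var y)   | x == y    = n
                    | otherwise = Var y
subst x n (Lam y m) | x == y    = Lam y m
                    | otherwise = Lam y (subst x n m)
subst x n (App f a) = App (subst x n f) (subst x n a)

-- One beta-step at the root: (λx. M) N  →β  M[x := N].
betaRoot :: Term -> Maybe Term
betaRoot (App (Lam x m) n) = Just (subst x n m)
betaRoot _                 = Nothing
```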

SLIDE 3

beta reduction

β-reduction:
  • is non-deterministic; however, it gives unique normal forms (see later: confluence)
  • is non-terminating; however, there are normalizing strategies (see: strategies)

SLIDE 4

beta-normal form: definition and lemma

intuition: a normal form (NF) is a result of a computation
definition: a λ-term is a β-normal form if it does not contain a β-redex, so it cannot do a β-reduction step
lemma: M is a β-normal form if and only if
  • M = λx. M0 with M0 a normal form, or
  • M = x M1 . . . Mn with n ≥ 0 and M1, . . . , Mn normal forms
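The definition amounts to a simple syntactic check. A sketch on a minimal term encoding (datatype and name `isNF` are ours):

```haskell
-- Hypothetical minimal encoding of lambda-terms.
data Term = Var String | Lam String Term | App Term Term

-- A term is a beta-normal form iff it contains no beta-redex,
-- i.e. no subterm of the shape App (Lam x m) n.
isNF :: Term -> Bool
isNF (Var _)   = True
isNF (Lam _ m) = isNF m
isNF (App f a) = notLam f && isNF f && isNF a
  where
    notLam (Lam _ _) = False
    notLam _         = True
```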

SLIDE 5

termination: definitions

M is terminating or strongly normalizing (SN) if all β-reduction sequences starting in M are finite
M is weakly normalizing (WN), or: M has a normal form, if there exists a β-reduction sequence starting in M that ends in a β-normal form

termination and weak normalization are undecidable
M is a normal form implies M has a normal form, but not vice versa

SLIDE 6

weak and strong normalization: quiz

  • (λx. x x) (λfy. f y) is SN
  • (λx. x Ω) (λy. z) is WN but not SN
  • (λx. x Ω) (λy. y) is not WN
  • (λx. x x x) (λx. x x x) is not WN

SLIDE 7

overview

  • fixed point combinators
  • strategies
    • call by value or leftmost-innermost
    • call by need
    • lazy reduction

SLIDE 8

fixed point

definition: x ∈ A is a fixed point of f : A → A if f(x) = x
examples: 0 and 1 are fixed points of f : R → R with x ↦ x²
for λ-calculus: M is a fixed point of F if F M =β M
example: every term M is a fixed point of I

SLIDE 9

towards fixed points

take a term F and consider W = λx. F (x x)
we have: W W = (λx. F (x x)) W →β F (W W)
so F (W W) =β W W

SLIDE 10

fixed point combinator

definition: Y is a fixed point combinator if F (Y F) =β Y F for every λ-term F
informally: we can use Y to construct a fixed point for a given term F

SLIDE 11

fixed point combinators

Curry’s fixed point combinator: Y = λf. (λx. f (x x)) (λx. f (x x))
Turing’s fixed point combinator: T = (λx. λy. y (x x y)) (λx. λy. y (x x y))
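Curry's Y itself does not typecheck in plain Haskell (the self-application x x would need a recursive type), but a fixed-point operator with the same defining equation, fix f = f (fix f), is directly definable thanks to lazy evaluation. The `factorial` below is a hypothetical usage example, not from the slides:

```haskell
-- A Haskell fixed-point operator: fix f is a fixed point of f,
-- since fix f = f (fix f) by definition.
fix :: (a -> a) -> a
fix f = f (fix f)

-- Hypothetical usage: tie the knot of a non-recursive "step" function.
factorial :: Integer -> Integer
factorial = fix (\rec n -> if n == 0 then 1 else n * rec (n - 1))
```

This terminates because laziness only unfolds fix f as far as f actually demands its argument.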

SLIDE 12

consider Curry’s fixed point combinator

for an arbitrary F we have:
Y F = (λf. (λx. f (x x)) (λx. f (x x))) F
    →β (λx. F (x x)) (λx. F (x x))
    →β F ((λx. F (x x)) (λx. F (x x)))
    β← F ((λf. (λx. f (x x)) (λx. f (x x))) F)
    = F (Y F)
and also: F ((λx. F (x x)) (λx. F (x x))) →β F (F ((λx. F (x x)) (λx. F (x x))))

SLIDE 13

consider Turing’s fixed point combinator

for an arbitrary F we have:
T F = (λx. λy. y (x x y)) (λx. λy. y (x x y)) F
    →β (λy. y (t t y)) F
    →β F (t t F)
    = F (T F)
with t = λx. λy. y (x x y)

SLIDE 14

use fixed point operator: example (Hindley)

question: define X such that X y =β X (a garbage disposer)

SLIDE 15

use fixed point operator: example (Hindley)

question: define X such that X y =β X (a garbage disposer)
X y =β X

SLIDE 16

use fixed point operator: example (Hindley)

question: define X such that X y =β X (a garbage disposer)
X y =β X
follows from X =β λy. X

SLIDE 17

use fixed point operator: example (Hindley)

question: define X such that X y =β X (a garbage disposer)
X y =β X
follows from X =β λy. X
follows from X =β (λx. λy. x) X

SLIDE 18

use fixed point operator: example (Hindley)

question: define X such that X y =β X (a garbage disposer)
X y =β X
follows from X =β λy. X
follows from X =β (λx. λy. x) X
follows from X =β Y (λx. λy. x)

SLIDE 19

use fixed point operator: example (Hindley)

question: define X such that X y =β X (a garbage disposer)
X y =β X
follows from X =β λy. X
follows from X =β (λx. λy. x) X
follows from X =β Y (λx. λy. x)
so define X = Y (λx. λy. x)

SLIDE 20

overview

  • fixed point combinators
  • strategies
    • call by value or leftmost-innermost
    • call by need
    • lazy reduction

SLIDE 21

reduction graph of a λ-term

terms are the vertices and (all) the reduction steps are the edges
  • a reduction graph may be finite and cycle-free; example: I x
  • a reduction graph may be finite with cycles; example: Ω
  • a reduction graph may be infinite; example: (λx. x x x) (λx. x x x)
  • a reduction graph is not necessarily simple; example: I (I I)
  • a reduction graph may be nice to draw; example: (λx. I x x) (λx. I x x)
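The out-degree of a vertex in the reduction graph is the number of redex occurrences in that term. A sketch of counting them on a minimal term encoding (datatype and names are ours); for instance, I (I I) has two redexes, matching the observation that its graph is not simple:

```haskell
-- Hypothetical minimal encoding of lambda-terms.
data Term = Var String | Lam String Term | App Term Term

-- Count beta-redex occurrences; each one is an outgoing edge
-- of this term in the reduction graph.
countRedexes :: Term -> Int
countRedexes (Var _)   = 0
countRedexes (Lam _ m) = countRedexes m
countRedexes (App f a) = here + countRedexes f + countRedexes a
  where
    here = case f of
      Lam _ _ -> 1   -- the application itself is a redex
      _       -> 0

-- The identity combinator I = λx. x.
i :: Term
i = Lam "x" (Var "x")
```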

SLIDE 22

strategy: informal definitions

  • there may be different ways to reduce a term
  • a strategy tells us how to reduce a term; more precisely: a subset of →β with the same normal forms
  • a term may be weakly normalizing (WN) but not terminating (SN)
  • a normalizing strategy yields a reduction to normal form if possible
  • a perpetual strategy yields an infinite reduction if possible
  • in general: a strategy gives us a reduction with a desired property

SLIDE 23

outermost and innermost redexes

  • a redex occurrence is outermost if it is not contained in another redex
  • all outermost redexes of a term are parallel; parallel means: no redex is contained in another redex
  • a redex occurrence is innermost if it does not contain another redex
  • all innermost redexes of a term are parallel
  • a set of parallel redexes has a unique leftmost redex, which is the leftmost one in the term tree

SLIDE 24

redex properties

outermost redex: every term not in NF has an outermost redex, and it has a unique leftmost-outermost redex
innermost redex: every term not in NF has an innermost redex, and it has a unique leftmost-innermost redex
lazy redex: every term not in WHNF has a lazy redex; the lazy redex is unique

SLIDE 25

call by value

reduce in every step the leftmost-innermost redex
  • it is not normalizing: (λx. y) Ω →β (λx. y) Ω →β (λx. y) Ω →β . . .
  • it does not copy redexes, example: (λx. f x x) ((λx. x) a) →β (λx. f x x) a →β f a a
  • it may contract redexes that are not needed: (λx. y) (I z) →β (λx. y) z →β y
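A sketch of one leftmost-innermost step on a naive term encoding (names are ours; the substitution assumes no variable capture). Two steps on the example (λx. f x x) ((λx. x) a) reach f a a, without ever copying the argument redex:

```haskell
-- Hypothetical minimal encoding of lambda-terms.
data Term = Var String | Lam String Term | App Term Term
  deriving (Eq, Show)

-- Naive substitution M[x := N]; assumes no variable capture.
subst :: String -> Term -> Term -> Term
subst x n (Var y)   | x == y    = n
                    | otherwise = Var y
subst x n (Lam y m) | x == y    = Lam y m
                    | otherwise = Lam y (subst x n m)
subst x n (App f a) = App (subst x n f) (subst x n a)

-- One leftmost-innermost step: reduce inside subterms first, and
-- contract a redex only once both its parts are in normal form.
liStep :: Term -> Maybe Term
liStep (Var _)   = Nothing
liStep (Lam x m) = Lam x <$> liStep m
liStep (App f a) =
  case liStep f of
    Just f' -> Just (App f' a)
    Nothing -> case liStep a of
      Just a' -> Just (App f a')
      Nothing -> case f of
        Lam x m -> Just (subst x a m)
        _       -> Nothing
```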

SLIDE 26

call by value: + and −

+ redexes are not copied
+ easy to implement
− not normalizing
− a reduction to normal form may reduce redexes that do not contribute to the normal form

SLIDE 27

call by need or leftmost-outermost

reduce in every step the leftmost-outermost redex
  • it is normalizing (theorem)
  • example: (λx. f x x) ((λx. x) 3) →β f ((λx. x) 3) ((λx. x) 3) →β f 3 ((λx. x) 3) →β f 3 3
  • example: (λx. y) Ω →β y
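A sketch of the leftmost-outermost step on the same kind of naive encoding (names are ours; the substitution assumes no variable capture), iterated with a fuel bound so nonterminating terms such as Ω yield Nothing instead of looping. It normalizes (λx. y) Ω in one step, as on the slide:

```haskell
-- Hypothetical minimal encoding of lambda-terms.
data Term = Var String | Lam String Term | App Term Term
  deriving (Eq, Show)

-- Naive substitution M[x := N]; assumes no variable capture.
subst :: String -> Term -> Term -> Term
subst x n (Var y)   | x == y    = n
                    | otherwise = Var y
subst x n (Lam y m) | x == y    = Lam y m
                    | otherwise = Lam y (subst x n m)
subst x n (App f a) = App (subst x n f) (subst x n a)

-- One leftmost-outermost step: contract a redex at the root before
-- looking inside, and look in the function part before the argument.
loStep :: Term -> Maybe Term
loStep (App (Lam x m) n) = Just (subst x n m)
loStep (App f a) = case loStep f of
  Just f' -> Just (App f' a)
  Nothing -> App f <$> loStep a
loStep (Lam x m) = Lam x <$> loStep m
loStep (Var _)   = Nothing

-- Iterate with a fuel bound; running out of fuel gives Nothing.
normalize :: Int -> Term -> Maybe Term
normalize 0 _ = Nothing
normalize k t = maybe (Just t) (normalize (k - 1)) (loStep t)

-- Omega = (λx. x x) (λx. x x), the standard looping term.
omega :: Term
omega = App d d where d = Lam "x" (App (Var "x") (Var "x"))
```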

SLIDE 28

call by need: + and −

+ normalizing
+ all steps contribute to the normal form
− redexes may be copied
− difficult to implement
SLIDE 29

call by need is normalizing

  • proof by Haskell B. Curry, 1958 (also recent work)
  • intuition: we get rid of the outermost redex only by reducing it
  • Haskell (the programming language) uses a form of call by need

SLIDE 30

the rightmost-outermost strategy

is not normalizing: ((λx. λy. x) I) Ω → ((λx. λy. x) I) Ω → . . .

SLIDE 31

two strategies

call by value, or leftmost-innermost, or applicative order:
  • close to what is done in ML
call by need, or leftmost-outermost, or normal order:
  • close to what is done in Haskell (but we will refine this more)
example: consider Y u with Y = λf. (λx. f (x x)) (λx. f (x x))

SLIDE 32

compare with Haskell: strategies and lazy evaluation

Haskell is a lazy functional programming language
that is: do as little as possible, do it at the last moment, do it at most once
how does this correspond to our lambda-calculus model?
very roughly: leftmost-outermost reduction
now we refine this a bit

SLIDE 33

evaluation in Haskell

outermost, however, built-in operators are evaluated first
  • example: 0 ∗ (head [1, 2]) first evaluates head because ∗ is strict
  • in λ-calculus: leftmost-outermost reduction strategy
not below a lambda
  • example: \x -> False && x is not evaluated
  • in λ-calculus: reduction to weak head normal form
sharing
  • in λ-calculus: only the idea

SLIDE 34

lambda calculus closer to Haskell

our basic model: programs are terms, evaluation is β-reduction, a result is a normal form
refined model closer to Haskell: programs are terms, evaluation is leftmost-outermost reduction (with sharing), a result is a weak head normal form (what is that?)

SLIDE 35

weak head normal form (WHNF)

intuition of a WHNF: at least one symbol is stable under reduction
definition of a WHNF:
  • λx. P with P an arbitrary λ-term
  • x P1 . . . Pn with n ≥ 0 and P1, . . . , Pn arbitrary λ-terms
examples of WHNFs: x, λx. x, x Ω, λx. Ω
every normal form is a WHNF, but not necessarily the other way around
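The two-clause definition translates directly into a predicate. A sketch on a minimal term encoding (datatype and names are ours); note that λx. Ω is a WHNF while Ω is not:

```haskell
-- Hypothetical minimal encoding of lambda-terms.
data Term = Var String | Lam String Term | App Term Term

-- A term is in WHNF if it is an abstraction (body arbitrary), or a
-- variable applied to any number of arbitrary arguments.
isWHNF :: Term -> Bool
isWHNF (Lam _ _) = True
isWHNF t         = varHead t
  where
    varHead (Var _)   = True
    varHead (App f _) = varHead f
    varHead _         = False

-- Omega = (λx. x x) (λx. x x).
omega :: Term
omega = App d d where d = Lam "x" (App (Var "x") (Var "x"))
```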

SLIDE 36

sharing

intuition (technically non-trivial):
(λx. x x) (I I) →β let x = I I in x x
                →β let x = I in x x
                →  let x = I in I x
                →β let x = I in x
                →  let x = I in I
we do not consider sharing in the lambda-calculus part

SLIDE 37

lazy evaluation in lambda calculus

goal: weak head normal form (WHNF)
strategy: contract the lazy redex, that is: the leftmost-outermost redex of a term that is not in WHNF
this is related to the evaluation strategy of Haskell

SLIDE 38

infinite lists in Haskell use lazy evaluation

in Haskell lists may be finite or potentially infinite

ones = 1 : ones

we can also compute with infinite lists
head ones gives output 1 thanks to lazy evaluation!
(what would innermost reduction give?)
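The slide's definition runs as ordinary Haskell; lazy evaluation only ever builds the prefix of the infinite list that is actually demanded, so head ones returns immediately:

```haskell
-- An infinite list of ones; the tail is the list itself.
ones :: [Integer]
ones = 1 : ones
```

With innermost reduction, evaluating the argument ones to a normal form first would loop forever before head could fire.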

SLIDE 39

strategies summary

leftmost-outermost
  • contracts repeatedly the leftmost-outermost redex
  • goal: normal form
  • normalizing
leftmost-innermost
  • contracts repeatedly the leftmost-innermost redex
  • goal: normal form
  • not normalizing
lazy
  • contracts repeatedly the lazy redex
  • goal: weak head normal form
  • reaches weak head normal form whenever possible