Continuations, All The Way Down
Tim Humphries, Ambiata



SLIDE 1

Continuations, All The Way Down

Tim Humphries Ambiata @thumphriees teh.id.au

Hi! My name’s Tim. I’m an engineer at Ambiata, here in Sydney. I gave my talk the wrong title. I’m not going to spend much time talking about CPS, shift/reset, or call/cc. I’m just going to be using functions to solve problems. I should have called it…

SLIDE 2

Functions Continuations, All The Way Down

Tim Humphries Ambiata @thumphriees teh.id.au

We’re going to use functions to improve expressions. We’re going to make typed interfaces for these improvements.

SLIDE 3

++

(append)

The first function I want to talk about is append. Append is my favourite function.

SLIDES 4-8

let xs = [1..10] ++ ([11..20] ++ [21..30])
    ys = ([1..10] ++ [11..20]) ++ [21..30]
in  xs == ys
-- True

Here I have two expressions that use append. Are these expressions the same? If we ask GHC the wrong question, it might say yes. These expressions evaluate to the same value, but they get there very differently. It’s easier to understand why when we draw them as trees. In these trees, the leaves are literal values, the branches are functions, and the edges are function application. We see that xs is a tree that leans to the right. It’s right-associated. We see that ys, which is left-associated, is a tree that leans to the left. This changes the way they evaluate.
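The claim that GHC “might say yes” is easy to check directly. A minimal standalone sketch (not from the slides), showing that both associations denote the same list value:

```haskell
-- Both associations build the same value, so (==) answers True,
-- even though the two trees are evaluated very differently.
main :: IO ()
main = do
  let xs = [1..10] ++ ([11..20] ++ [21..30]) :: [Int]
      ys = ([1..10] ++ [11..20]) ++ [21..30] :: [Int]
  print (xs == ys)        -- True
  print (xs == [1..30])   -- True
```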

SLIDES 9-29

ll ++ rr =
  case ll of
    (a:bc) -> a : bc ++ rr
    []     -> rr

(evaluation animation: each tree yields 1 : 2 : 3 : …)

Both of these expressions have a spine comprised of two appends. If we take a look at the implementation of append, we see it never touches the right branch. It walks down the left branch until it hits an element. It returns that element in a cons. The rest of the list is a recursive call to append. i.e. until the list on the left is eliminated, the tree’s spine stays the same. With this in mind, let’s see how they evaluate. xs, on the left, can pull an element out in constant time. ys, on the right, takes an extra step.
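The extra steps can be made countable. A sketch with a hypothetical helper, `appendCosted`, which follows the slide’s definition but also returns how many uncons steps it performed:

```haskell
-- A costed version of the slide's append: alongside the result, it
-- returns the number of uncons steps taken (one per element of the
-- left-hand list).
appendCosted :: [a] -> [a] -> ([a], Int)
appendCosted []       rr = (rr, 0)
appendCosted (a : bc) rr =
  let (rest, n) = appendCosted bc rr
  in  (a : rest, n + 1)

-- Right-associated: the inner append walks [11..20], the outer walks [1..10].
rightCost :: Int
rightCost =
  let (inner, n1) = appendCosted [11..20] ([21..30] :: [Int])
      (_,     n2) = appendCosted [1..10] inner
  in  n1 + n2                        -- 10 + 10 = 20

-- Left-associated: the outer append has to re-walk the 20-element result.
leftCost :: Int
leftCost =
  let (inner, n1) = appendCosted [1..10] ([11..20] :: [Int])
      (_,     n2) = appendCosted inner [21..30]
  in  n1 + n2                        -- 10 + 20 = 30

main :: IO ()
main = print (rightCost, leftCost)  -- (20,30)
```

The left-associated tree repeats work on the already-appended prefix, which is exactly the overhead the next slides quantify.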

SLIDES 30-33

xs ++ ys =
  case xs of
    (x:xx) -> x : xx ++ ys
    []     -> ys

worse (additional 30-odd steps)

This isn’t much of a problem for our little expression. However, for deeper append trees, it could lead to substantial overhead. Here we have two larger append trees, associated right and associated left. They have the same spine, three appends. The RHS is significantly worse: almost twice as many operations.

SLIDES 34-37

doit_rec :: Int -> Writer [String] ()
doit_rec 0 = pure ()
doit_rec x = do
  doit_rec (x-1) -- left-associated bind!
  tell ["Message " ++ show x]

snd . runWriter

A bad, left-associated append might be created without our knowledge. A beginner will encounter this when using Writer. Writer expects a Monoid, and lists are the most popular Monoid. Behind every bind is an append, so the associativity of our monadic code suddenly matters. This is a particularly bad expression: it builds a deeply left-associated bind, then calls tell. This leads to a left-biased tree, where the spine is all binds. When we run the writer, we get a left-biased append with the same structure.

SLIDE 38

doit_rec :: Int -> Writer [String] ()
doit_rec 0 = pure ()
doit_rec x = do
  doit_rec (x-1) -- left-associated bind!
  tell ["Message " ++ show x]

awful

When we run this as a benchmark, we see performance is pathological.
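For comparison, here is a hypothetical right-associated variant (`doit_iter` is my name, not the talk’s): telling before recursing keeps the left operand of every bind small. Note the log now comes out in descending order, so it is not a drop-in replacement:

```haskell
import Control.Monad.Writer

-- Right-associated variant: tell before recursing, so the left operand
-- of each bind is a single tell rather than a growing chain of binds.
doit_iter :: Int -> Writer [String] ()
doit_iter 0 = pure ()
doit_iter x = do
  tell ["Message " ++ show x]
  doit_iter (x - 1)

main :: IO ()
main = print (snd (runWriter (doit_iter 3)))
-- ["Message 3","Message 2","Message 1"]
```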

SLIDE 39

.

(compose)

I’m going to solve this problem using the greatest function of all, compose.

SLIDES 40-48

ghci> :t ([1..10] ++)
[Int] -> [Int]
ghci> ([1..10] ++) $ []
[1,2,3,4,5,6,7,8,9,10]
ghci> :t ([1..10] ++) . ([11..20] ++)
[Int] -> [Int]
ghci> ([1..10] ++) . ([11..20] ++) $ []
[1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20]

(suspended append)

What I want to do is suspend an append. If we look at the type of a suspended append, we see it’s a function, expecting another list. We can finalise this suspended append by applying it to the empty list. We can compose two suspended appends together. We can finalise this pipeline the same way. Note how the appends have composed together.

SLIDES 49-54

([1..10] ++) . ([11..20] ++) $ []

f . g = \x -> f (g x)
f $ x = f x

Claim: composed pipelines of appends are always right-associated. The tree got bigger, but this follows from the definition of function composition: right-associated function application. If we expand through the tree, we see it works for this simple example. We eliminate the compose operator fairly quickly, and are then left with the optimal append chain. You really need to try this trick by hand to get a feel for it, so I won’t linger.
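Trying the trick by hand, with the two definitions above, might look like this sketch (the comments walk the expansion; the code just confirms the final value):

```haskell
-- Expanding the pipeline by hand:
--   ([1..10] ++) . ([11..20] ++) $ []
-- = (\x -> ([1..10] ++) (([11..20] ++) x)) []   -- definition of (.)
-- = [1..10] ++ ([11..20] ++ [])                 -- apply to [], unwrap sections
-- The appends end up right-associated, as claimed.
main :: IO ()
main = print (([1..10] ++) . ([11..20] ++) $ ([] :: [Int]))
```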

SLIDES 55-60

(([1..10] ++) . ([11..20] ++)) . ([21..30] ++) $ []

f . g = \x -> f (g x)
f $ x = f x

It still works when we create a left-associated append chain. Compose will reorient everything into the right-biased tree. The tree is even bigger. We substitute in for compose and proceed as we did before: we now have a right-biased append chain expecting its final argument, so we compose again. Things have propagated in a way that you may have found surprising; again, this works because of the way compose is defined. Try it at home. We get the good tree at the end. Success!
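The same hand-expansion, sketched for the left-associated pipeline:

```haskell
-- Expanding the left-associated pipeline with the same definitions:
--   (([1..10] ++) . ([11..20] ++)) . ([21..30] ++) $ []
-- = (([1..10] ++) . ([11..20] ++)) (([21..30] ++) [])   -- outer (.), then ($)
-- = [1..10] ++ ([11..20] ++ ([21..30] ++ []))           -- inner (.)
-- Right-associated again, even though we composed to the left.
main :: IO ()
main = print ((([1..10] ++) . ([11..20] ++)) . ([21..30] ++) $ ([] :: [Int]))
```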

SLIDE 61

Monoid w => Monad (Writer w)

We can’t use functions in a Writer, so we haven’t really fixed our problem yet. We need a type.

slide-62
SLIDE 62

newtype AppendK a = AppendK { unAppendK :: [a] -> [a] }

My new type is called AppendK. It’s just a newtype around suspended append. We hoist a list into AppendK by suspending an append. Partial application. We finalise our pipeline by applying it to the empty list.

slide-63
SLIDE 63

newtype AppendK a = AppendK { unAppendK :: [a] -> [a] } fromList :: [a] -> AppendK a

My new type is called AppendK. It’s just a newtype around suspended append. We hoist a list into AppendK by suspending an append. Partial application. We finalise our pipeline by applying it to the empty list.

slide-64
SLIDE 64

newtype AppendK a = AppendK { unAppendK :: [a] -> [a] } fromList :: [a] -> AppendK a fromList xs =

My new type is called AppendK. It’s just a newtype around suspended append. We hoist a list into AppendK by suspending an append. Partial application. We finalise our pipeline by applying it to the empty list.

slide-65
SLIDE 65

newtype AppendK a = AppendK { unAppendK :: [a] -> [a] } fromList :: [a] -> AppendK a fromList xs = AppendK (xs ++)

My new type is called AppendK. It’s just a newtype around suspended append. We hoist a list into AppendK by suspending an append. Partial application. We finalise our pipeline by applying it to the empty list.

slide-66
SLIDE 66

newtype AppendK a = AppendK { unAppendK :: [a] -> [a] } fromList :: [a] -> AppendK a fromList xs = AppendK (xs ++)

suspended append


slide-67
SLIDE 67

newtype AppendK a = AppendK { unAppendK :: [a] -> [a] } fromList :: [a] -> AppendK a fromList xs = AppendK (xs ++) toList :: AppendK a -> [a]

suspended append


slide-68
SLIDE 68

newtype AppendK a = AppendK { unAppendK :: [a] -> [a] } fromList :: [a] -> AppendK a fromList xs = AppendK (xs ++) toList :: AppendK a -> [a] toList (AppendK k) =

suspended append


slide-69
SLIDE 69

newtype AppendK a = AppendK { unAppendK :: [a] -> [a] } fromList :: [a] -> AppendK a fromList xs = AppendK (xs ++) toList :: AppendK a -> [a] toList (AppendK k) = k []

suspended append


slide-70
SLIDE 70

newtype AppendK a = AppendK { unAppendK :: [a] -> [a] } fromList :: [a] -> AppendK a fromList xs = AppendK (xs ++) toList :: AppendK a -> [a] toList (AppendK k) = k []

suspended append


slide-71
SLIDE 71

newtype AppendK a = AppendK { unAppendK :: [a] -> [a] }

Lastly, we need to write an append function. As before, we find composing these functions gives us a valid append operation. This means we get constant-time append. We can then trivially write a monoid instance. We can then use it to solve our problem. We see our microbenchmark has been fixed.

slide-72
SLIDE 72

newtype AppendK a = AppendK { unAppendK :: [a] -> [a] } append (AppendK xs) (AppendK ys) = AppendK (xs . ys)


slide-73
SLIDE 73

newtype AppendK a = AppendK { unAppendK :: [a] -> [a] } append (AppendK xs) (AppendK ys) = AppendK (xs . ys) instance Monoid (AppendK a) where mappend = append mempty = AppendK id


slide-74
SLIDE 74

newtype AppendK a = AppendK { unAppendK :: [a] -> [a] } append (AppendK xs) (AppendK ys) = AppendK (xs . ys) instance Monoid (AppendK a) where mappend = append mempty = AppendK id doit_rec :: Int -> AppendK String ()


slide-75
SLIDE 75

newtype AppendK a = AppendK { unAppendK :: [a] -> [a] } append (AppendK xs) (AppendK ys) = AppendK (xs . ys) instance Monoid (AppendK a) where mappend = append mempty = AppendK id doit_rec :: Int -> AppendK String ()

~linear time


slide-76
SLIDE 76

newtype AppendK m a = AppendK { unAppendK :: m a -> m a } fromList :: Monoid m => m a -> AppendK m a fromList xs = AppendK (xs <>) toList :: Monoid m => AppendK m a -> m a toList (AppendK k) = k mempty

(Benchmark!)

The only list functions we were using were append and mempty. We can generalise AppendK for any Monoid. It might not be an optimisation for all Monoids! Benchmark first.
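Putting the pieces together, here is a self-contained sketch of the single-parameter AppendK; the Semigroup instance is an addition for modern GHC, where Monoid requires it:

```haskell
-- Difference-list style AppendK, assembled from the slides.
newtype AppendK a = AppendK { unAppendK :: [a] -> [a] }

fromList :: [a] -> AppendK a
fromList xs = AppendK (xs ++)   -- suspend the append via partial application

toList :: AppendK a -> [a]
toList (AppendK k) = k []       -- finalise the pipeline against []

instance Semigroup (AppendK a) where
  AppendK xs <> AppendK ys = AppendK (xs . ys)  -- constant-time append

instance Monoid (AppendK a) where
  mempty = AppendK id

main :: IO ()
main = print (toList (fromList [1..3] <> (fromList [4..6] <> fromList [7..9 :: Int])))
-- prints [1,2,3,4,5,6,7,8,9]
```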

slide-77
SLIDE 77

<$>

(fmap)

Time to talk about my second-favourite function, fmap.

slide-78
SLIDE 78

fmap isEven (fmap (+1) [1..100]) fmap (isEven . (+1)) [1..100]

Are these two expressions the same? Again, same trick. If we ask GHC the wrong question, it will say yes. We also know they’re equal because we know about the Functor laws. fmap fusion is supposed to be a valid transformation. Let’s draw the trees. Observe that the first tree has two fmap nodes. The second tree has just one. It’s been fused. This changes the way they evaluate.

slide-79
SLIDE 79

fmap isEven (fmap (+1) [1..100]) fmap (isEven . (+1)) [1..100] let xs = ys = in xs == ys


slide-80
SLIDE 80

fmap isEven (fmap (+1) [1..100]) fmap (isEven . (+1)) [1..100] let xs = ys = in xs == ys

True


slide-81
SLIDE 81

fmap isEven (fmap (+1) [1..100]) fmap (isEven . (+1)) [1..100] let xs = ys = in xs == ys

True

fmap f . fmap g == fmap (f . g)


slide-82
SLIDE 82

fmap isEven (fmap (+1) [1..100]) fmap (isEven . (+1)) [1..100] let xs = ys = in xs == ys

True

fmap f . fmap g == fmap (f . g)


slide-83
SLIDE 83

fmap f . fmap g == fmap (f . g)

Let’s look at the code for fmap of list. Observe, like append, that it recursively calls itself until the list is exhausted. This means that the spine, made out of fmaps, does not change until the list is exhausted. If we step through the left tree, we see it has to traverse every fmap node to produce a single value. The right tree can produce new values in constant time. It probably also saves some heap space.
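The fusion law can be checked concretely; `isEven` here is assumed to be the standard `even` predicate:

```haskell
-- Concrete check of the fmap fusion law on a list.
isEven :: Int -> Bool
isEven = even

main :: IO ()
main = print (fmap isEven (fmap (+1) [1..100]) == fmap (isEven . (+1)) [1..100])
-- prints True
```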

slide-84
SLIDE 84

instance Functor [] where fmap _ [] = [] fmap f (x:xs) = f x : fmap f xs fmap f . fmap g == fmap (f . g)


slide-85
SLIDE 85

instance Functor [] where fmap _ [] = [] fmap f (x:xs) = f x : fmap f xs fmap f . fmap g == fmap (f . g)


slide-86
SLIDE 86

instance Functor [] where fmap _ [] = [] fmap f (x:xs) = f x : fmap f xs fmap f . fmap g == fmap (f . g)

tree shape never changes (fmap rebuilds it)


slide-87
SLIDE 87

instance Functor [] where fmap _ [] = [] fmap f (x:xs) = f x : fmap f xs fmap f . fmap g == fmap (f . g)


slide-88
SLIDE 88

instance Functor [] where fmap _ [] = [] fmap f (x:xs) = f x : fmap f xs fmap f . fmap g == fmap (f . g)


slide-89
SLIDE 89

instance Functor [] where fmap _ [] = [] fmap f (x:xs) = f x : fmap f xs fmap f . fmap g == fmap (f . g) isEven (1+1):


slide-90
SLIDE 90

instance Functor [] where fmap _ [] = [] fmap f (x:xs) = f x : fmap f xs fmap f . fmap g == fmap (f . g) isEven (1+1):


slide-91
SLIDE 91

instance Functor [] where fmap _ [] = [] fmap f (x:xs) = f x : fmap f xs fmap f . fmap g == fmap (f . g) isEven (1+1):


slide-92
SLIDE 92

instance Functor [] where fmap _ [] = [] fmap f (x:xs) = f x : fmap f xs fmap f . fmap g == fmap (f . g) isEven (1+1): isEven (2+1):


slide-93
SLIDE 93

instance Functor [] where fmap _ [] = [] fmap f (x:xs) = f x : fmap f xs fmap f . fmap g == fmap (f . g) isEven (1+1): isEven (2+1):


slide-94
SLIDE 94

instance Functor [] where fmap _ [] = [] fmap f (x:xs) = f x : fmap f xs fmap f . fmap g == fmap (f . g) isEven (1+1): isEven (2+1):


slide-95
SLIDE 95

instance Functor [] where fmap _ [] = [] fmap f (x:xs) = f x : fmap f xs fmap f . fmap g == fmap (f . g) isEven (1+1): isEven (2+1): isEven (3+1):


slide-96
SLIDE 96

instance Functor [] where fmap _ [] = [] fmap f (x:xs) = f x : fmap f xs fmap f . fmap g == fmap (f . g)


slide-97
SLIDE 97

instance Functor [] where fmap _ [] = [] fmap f (x:xs) = f x : fmap f xs fmap f . fmap g == fmap (f . g)


slide-98
SLIDE 98

instance Functor [] where fmap _ [] = [] fmap f (x:xs) = f x : fmap f xs fmap f . fmap g == fmap (f . g)


slide-99
SLIDE 99

instance Functor [] where fmap _ [] = [] fmap f (x:xs) = f x : fmap f xs fmap f . fmap g == fmap (f . g) isEven (1+1):


slide-100
SLIDE 100

instance Functor [] where fmap _ [] = [] fmap f (x:xs) = f x : fmap f xs fmap f . fmap g == fmap (f . g) isEven (1+1):


slide-101
SLIDE 101

instance Functor [] where fmap _ [] = [] fmap f (x:xs) = f x : fmap f xs fmap f . fmap g == fmap (f . g) isEven (1+1): isEven (2+1):


slide-102
SLIDE 102

instance Functor [] where fmap _ [] = [] fmap f (x:xs) = f x : fmap f xs fmap f . fmap g == fmap (f . g) isEven (1+1): isEven (2+1):


slide-103
SLIDE 103

instance Functor [] where fmap _ [] = [] fmap f (x:xs) = f x : fmap f xs fmap f . fmap g == fmap (f . g) isEven (1+1): isEven (2+1): isEven (3+1):


slide-104
SLIDE 104

fmap f . fmap g == fmap (f . g)

You might think: why would I care about the number of fmap nodes? You might not, but I do, because I have this really stupid function I want to run. It creates expressions with a huge number of fmaps. This forms a right-degenerate tree, and we need to walk through it aggressively to produce each element.

slide-105
SLIDE 105

fmap f . fmap g == fmap (f . g) list_add_tail_rec :: Int -> [Int] -> [Int] list_add_tail_rec 0 xs = xs list_add_tail_rec x xs = list_add_tail_rec (x-1) (fmap (+1) xs)


slide-106
SLIDE 106

fmap f . fmap g == fmap (f . g) list_add_tail_rec :: Int -> [Int] -> [Int] list_add_tail_rec 0 xs = xs list_add_tail_rec x xs = list_add_tail_rec (x-1) (fmap (+1) xs)


slide-107
SLIDE 107

ghci> :t fmap

Claim: we can solve this with function composition. We can make fmap fusion mandatory. This is the type of fmap. We want to suspend the initial value, rather than the map. We want to get the mapping function later. So, let’s flip fmap. I’d like to compose together a single mapping function, before throwing it into an fmap. I’m going to suspend an fmap, but partially applied and flipped. My new type is called MapK. Don’t worry too much about the details, just observe it looks a lot like flip fmap. To lift into MapK, we write a function that grabs the future mapping function (k), and uses fmap to apply it to our initial value. This is the one and only fmap.

slide-108
SLIDE 108

ghci> :t fmap Functor f => (a -> b) -> f a -> f b


slide-109
SLIDE 109

ghci> :t fmap Functor f => (a -> b) -> f a -> f b ghci> :t flip fmap


slide-110
SLIDE 110

ghci> :t fmap Functor f => (a -> b) -> f a -> f b ghci> :t flip fmap Functor f => f a -> (a -> b) -> f b


slide-111
SLIDE 111

ghci> :t fmap Functor f => (a -> b) -> f a -> f b newtype MapK f a = MapK { unMapK :: forall b. (a -> b) -> f b } ghci> :t flip fmap Functor f => f a -> (a -> b) -> f b


slide-112
SLIDE 112

ghci> :t fmap Functor f => (a -> b) -> f a -> f b newtype MapK f a = MapK { unMapK :: forall b. (a -> b) -> f b } fromFunctor :: Functor f => f a -> MapK f a fromFunctor f = MapK (\k -> fmap k f) ghci> :t flip fmap Functor f => f a -> (a -> b) -> f b


slide-113
SLIDE 113

ghci> :t fmap Functor f => (a -> b) -> f a -> f b newtype MapK f a = MapK { unMapK :: forall b. (a -> b) -> f b } fromFunctor :: Functor f => f a -> MapK f a fromFunctor f = MapK (\k -> fmap k f) ghci> :t flip fmap Functor f => f a -> (a -> b) -> f b

(the _only_ fmap)

suspended fmap


slide-114
SLIDE 114

To add another map to the chain, we just grab the existing mapping function (k) and compose. ‘g’ here is our outer fmap. We can now write a Functor instance using nothing but our MapK function. Look, no constraints! This is slightly more general than a Functor.

slide-115
SLIDE 115

mapK :: (a -> b) -> MapK f a -> MapK f b mapK f (MapK g) = MapK (\k -> g (k . f))


slide-116
SLIDE 116

mapK :: (a -> b) -> MapK f a -> MapK f b mapK f (MapK g) = MapK (\k -> g (k . f)) instance Functor (MapK f) where fmap = mapK


slide-117
SLIDE 117

mapK :: (a -> b) -> MapK f a -> MapK f b mapK f (MapK g) = MapK (\k -> g (k . f)) instance Functor (MapK f) where fmap = mapK unconstrained!


slide-118
SLIDE 118

list_add :: Int -> [Int] -> [Int] list_add x xs = toFunctor (list_add' x (fromFunctor xs)) list_add' :: Int -> MapK [] Int -> MapK [] Int list_add' 0 xs = xs list_add' x xs = list_add' (x-1) (fmap (+1) xs)

Now we can rewrite our pathological benchmark to use MapK. We find we’ve eliminated a quadratic factor. (The functions are also quadratic)

slide-119
SLIDE 119

list_add :: Int -> [Int] -> [Int] list_add x xs = toFunctor (list_add' x (fromFunctor xs)) list_add' :: Int -> MapK [] Int -> MapK [] Int list_add' 0 xs = xs list_add' x xs = list_add' (x-1) (fmap (+1) xs)


slide-120
SLIDE 120

list_add :: Int -> [Int] -> [Int] list_add x xs = toFunctor (list_add' x (fromFunctor xs)) list_add' :: Int -> MapK [] Int -> MapK [] Int list_add' 0 xs = xs list_add' x xs = list_add' (x-1) (fmap (+1) xs)

fmaps +1 . +1 . +1 …

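Assembled into one runnable sketch. This structure is known elsewhere as Coyoneda; `toFunctor` is not shown on the slides, and is assumed here to run the suspension with the identity continuation:

```haskell
{-# LANGUAGE RankNTypes #-}

-- MapK: a list (or any functor) with its pending maps fused into one function.
newtype MapK f a = MapK { unMapK :: forall b. (a -> b) -> f b }

fromFunctor :: Functor f => f a -> MapK f a
fromFunctor f = MapK (\k -> fmap k f)   -- the one and only fmap

toFunctor :: MapK f a -> f a
toFunctor (MapK g) = g id               -- run with the identity continuation

instance Functor (MapK f) where         -- note: no Functor f constraint
  fmap f (MapK g) = MapK (\k -> g (k . f))

-- The pathological benchmark: n repeated fmaps, now fused by composition.
list_add :: Int -> [Int] -> [Int]
list_add n = toFunctor . go n . fromFunctor
  where
    go 0 xs = xs
    go x xs = go (x - 1) (fmap (+1) xs)  -- composes functions; no traversal yet

main :: IO ()
main = print (list_add 1000 [1, 2, 3])
-- prints [1001,1002,1003]
```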

slide-121
SLIDE 121

>>=

(bind)

My next most favourite function, Bind, is up next.

slide-122
SLIDE 122

bind associativity matters (sometimes)

From the Writer example, we learned that bind associativity matters sometimes. There can be hidden appends underneath, or worse (recursive definition of bind). My writer example was fixed by reassociating the underlying append. However, we could also just rearrange the binds!

slide-123
SLIDE 123

bind associativity matters (sometimes)

awful


slide-124
SLIDE 124

bind associativity matters (sometimes)


slide-125
SLIDE 125

newtype BindK m a = BindK { }

Just like before, we’re going to suspend bind in a function by partially applying it. We can lower back into the target monad by directing that bind pipeline into a return. Note that this is the only return in our resulting expression. The expression will be in monadic normal form. I’ll flesh out the types now, but don’t stress. It’s just the right-hand side of a bind. Just like we had a Monoid for AppendK, and a Functor for MapK, we can build a Monad instance for BindK. Wrapping an interface in functions produces a thing that’s slightly more general than that interface. Don’t worry about the monad instance too much either. If you work it out in GHCi it’s quite intuitive, but less so on a slide. Basically, we have three bind chains, and we just plumb them together using function application. It looks a lot like inlined compose.

slide-126
SLIDE 126

fromMonad m = BindK (m >>=)

newtype BindK m a = BindK { }

suspended bind!


slide-127
SLIDE 127

fromMonad m = BindK (m >>=)

newtype BindK m a = BindK { }

suspended bind!

toMonad (BindK k) = k return

_only_ return!


slide-128
SLIDE 128

fromMonad m = BindK (m >>=)

newtype BindK m a = BindK { }

suspended bind!

toMonad (BindK k) = k return

_only_ return!

runBindK :: forall b. (a -> m b) -> m b

fromMonad :: Monad m => m a -> BindK m a

toMonad :: Monad m => BindK m a -> m a


slide-129
SLIDE 129

fromMonad m = BindK (m >>=)

newtype BindK m a = BindK { }

suspended bind!

toMonad (BindK k) = k return

_only_ return!

runBindK :: forall b. (a -> m b) -> m b

fromMonad :: Monad m => m a -> BindK m a

toMonad :: Monad m => BindK m a -> m a

instance Monad (BindK m) where
  BindK m >>= k = BindK (\c -> m (\a -> runBindK (k a) c))

(just plumbing)

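Collected from the fragments on these slides, a minimal self-contained sketch of BindK. One caveat: the Applicative instance is my addition, needed for the Monad instance to compile under modern GHC; everything else follows the slides.

```haskell
{-# LANGUAGE RankNTypes #-}

-- Suspended bind: a monadic value represented by what it does
-- to the rest of the computation.
newtype BindK m a = BindK { runBindK :: forall b. (a -> m b) -> m b }

-- Raise: partially apply (>>=) to suspend it.
fromMonad :: Monad m => m a -> BindK m a
fromMonad m = BindK (m >>=)

-- Lower: feed the pipeline the only return in the whole expression.
toMonad :: Monad m => BindK m a -> m a
toMonad (BindK k) = k return

-- fmap fusion, same as MapK; note there is no constraint on m.
instance Functor (BindK m) where
  fmap f (BindK m) = BindK (\k -> m (k . f))

instance Applicative (BindK m) where
  pure a = BindK (\k -> k a)
  BindK mf <*> BindK ma = BindK (\k -> mf (\f -> ma (k . f)))

-- Just plumbing: bind chains glued together with function application.
instance Monad (BindK m) where
  BindK m >>= k = BindK (\c -> m (\a -> runBindK (k a) c))
```

However the binds nest in BindK, lowering with toMonad supplies the single return at the end, so the underlying monad sees a right-associated chain.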

slide-130
SLIDE 130

instance Monad (BindK m) where
  BindK m >>= k = BindK (\c -> m (\a -> runBindK (k a) c))

(just plumbing)

The Functor instance performs fmap fusion, just like MapK! There are no Functor or Monad constraints on BindK’s instances. It’s slightly more general than a Monad transformer.

slide-131
SLIDE 131

instance Monad (BindK m) where
  BindK m >>= k = BindK (\c -> m (\a -> runBindK (k a) c))

(just plumbing)

instance Functor (BindK f) where
  fmap f (BindK m) = BindK (\k -> m (k . f))


slide-132
SLIDE 132

instance Monad (BindK m) where
  BindK m >>= k = BindK (\c -> m (\a -> runBindK (k a) c))

(just plumbing)

instance Functor (BindK f) where
  fmap f (BindK m) = BindK (\k -> m (k . f))

Same as MapK!


slide-133
SLIDE 133

instance Monad (BindK m) where
  BindK m >>= k = BindK (\c -> m (\a -> runBindK (k a) c))

(just plumbing)

instance Functor (BindK f) where
  fmap f (BindK m) = BindK (\k -> m (k . f))

unconstrained!

Same as MapK!


slide-134
SLIDE 134

doit_rec_bindk :: Int -> Writer [String] ()
doit_rec_bindk x =
  toMonad (fun x)
  where
    fun :: Int -> BindK (Writer [String]) ()
    fun 0 = pure ()
    fun y = do
      fun (y - 1)
      fromMonad (tell ["Message " ++ show x])

We can use BindK to fix both of our benchmarks at once. For the Writer benchmark, by reassociating the binds to the right, we’ve produced a right-associated list append under the hood. We solved it in a different way.


slide-136
SLIDE 136

list_add_bindk :: Int -> [Int] -> [Int]
list_add_bindk x xs =
  toMonad (fun x (fromMonad xs))
  where
    fun 0 xs = xs
    fun x xs = fun (x - 1) (fmap (+ 1) xs)

For our fmap benchmark, all we do is raise into BindK and use its fmap. Same outcome!

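The same benchmark as a stand-alone sketch, with BindK and its helpers repeated from the slides so the block compiles on its own:

```haskell
{-# LANGUAGE RankNTypes #-}

newtype BindK m a = BindK { runBindK :: forall b. (a -> m b) -> m b }

fromMonad :: Monad m => m a -> BindK m a
fromMonad m = BindK (m >>=)

toMonad :: Monad m => BindK m a -> m a
toMonad (BindK k) = k return

instance Functor (BindK m) where
  fmap f (BindK m) = BindK (\k -> m (k . f))

-- Each fmap composes onto the continuation; the list is only
-- rebuilt once, when toMonad finally supplies return.
list_add_bindk :: Int -> [Int] -> [Int]
list_add_bindk n xs0 = toMonad (go n (fromMonad xs0))
  where
    go 0 xs = xs
    go k xs = go (k - 1) (fmap (+ 1) xs)
```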

slide-138
SLIDE 138

Jargon

nothing but functions

The implementations of these concepts in Haskell are nothing but functions. The concepts themselves are much more complicated and worthy of study.

slide-139
SLIDE 139

AppendK == DList



slide-140
SLIDE 140

AppendK == DList
AppendK == Endo []




slide-142
SLIDE 142

AppendK == DList
AppendK == Endo []
MapK == Yoneda




slide-144
SLIDE 144

AppendK == DList
AppendK == Endo []
MapK == Yoneda
BindK == Codensity


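For concreteness, here is the first of those equivalences as a minimal difference-list sketch: AppendK is just a partially applied (++), which is exactly Endo [a]. (Names here are illustrative, not the dlist library's API.)

```haskell
-- A list represented by "what it appends": a partially applied (++).
newtype AppendK a = AppendK ([a] -> [a])

fromList :: [a] -> AppendK a
fromList xs = AppendK (xs ++)

toList :: AppendK a -> [a]
toList (AppendK k) = k []

-- Append is function composition, so it is associative "for free":
-- every underlying (++) ends up right-nested when toList supplies [].
instance Semigroup (AppendK a) where
  AppendK f <> AppendK g = AppendK (f . g)

instance Monoid (AppendK a) where
  mempty = AppendK id
```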

slide-145
SLIDE 145

Improve

Codensity was popularised by Janis Voigtländer’s paper on asymptotically improving free monads. His improve function selects Codensity (Free f) in place of Free f. It has since been improved upon further in Ed Kmett’s free library via Church encoding. More functions!
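A sketch of the paper’s idea, with a hand-rolled Free and Codensity so it stands alone. Hedges apply: the real improve (in the free library) uses a MonadFree constraint so the computation can emit f-effects; myImprove below is a hypothetical stand-in constrained only by Monad, so it shows the shape of the trick rather than the full version.

```haskell
{-# LANGUAGE RankNTypes #-}

data Free f a = Pure a | Free (f (Free f a))

instance Functor f => Functor (Free f) where
  fmap f (Pure a)  = Pure (f a)
  fmap f (Free fa) = Free (fmap (fmap f) fa)

instance Functor f => Applicative (Free f) where
  pure = Pure
  mf <*> ma = mf >>= \f -> fmap f ma

-- Left-nested binds retraverse the tree: this is the slow part.
instance Functor f => Monad (Free f) where
  Pure a  >>= k = k a
  Free fa >>= k = Free (fmap (>>= k) fa)

newtype Codensity m a =
  Codensity { runCodensity :: forall b. (a -> m b) -> m b }

instance Functor (Codensity m) where
  fmap f (Codensity m) = Codensity (\k -> m (k . f))

instance Applicative (Codensity m) where
  pure a = Codensity (\k -> k a)
  Codensity mf <*> Codensity ma = Codensity (\k -> mf (\f -> ma (k . f)))

instance Monad (Codensity m) where
  Codensity m >>= k = Codensity (\c -> m (\a -> runCodensity (k a) c))

-- Run the computation via Codensity (Free f) instead of Free f:
-- the binds reassociate to the right before the tree is ever built.
myImprove :: Functor f => (forall m. Monad m => m a) -> Free f a
myImprove m = runCodensity m Pure
```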

slide-146
SLIDE 146
  • re-encoded interfaces with functions
  • “CPS embedding” of Monoid, Functor, Monad
  • right-associated our appends and binds
  • useful for lists and recursive structures
  • fused our fmaps
  • only needed compose / (.) and a newtype
  • “just a cute trick”
  • functions are bad actually
slide-147
SLIDE 147

ContT

(all of the above) (… and more)

“mother of all monads”

(because functions)

… but what I really should have called this talk was “ContT à la carte”. ContT is a more powerful version of Codensity, permitting wild and wacky control flow.

slide-148
SLIDE 148

ContT

(all of the above) (… and more)

“mother of all monads” à la carte

(because functions)


slide-149
SLIDE 149

newtype BindK m a = BindK { runBindK :: forall b. (a -> m b) -> m b }

slide-150
SLIDE 150

newtype BindK m a = BindK { runBindK :: forall b. (a -> m b) -> m b }

newtype ContT b m a = ContT { runContT :: (a -> m b) -> m b }
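As a taste of what the fixed answer type buys over Codensity’s fully polymorphic one: callCC can capture the current continuation and jump through it. This uses Control.Monad.Trans.Cont from transformers (a GHC boot library); the early-exit product is my own illustration, not from the talk.

```haskell
import Control.Monad.Trans.Cont (Cont, callCC, evalCont)

-- Multiply a list, bailing out the moment a zero appears.
-- callCC hands us the current continuation as an escape hatch:
-- calling exit abandons the rest of the computation entirely.
productEarly :: [Int] -> Int
productEarly xs = evalCont $
  callCC $ \exit -> do
    let go acc []       = pure acc
        go _   (0 : _)  = exit 0
        go acc (y : ys) = go (acc * y) ys
    go 1 xs
```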

slide-151
SLIDE 151

ByteString Builder

Like DList, but with special low-level operations.

The “DList trick” is very common in the wild. You probably use ByteString Builders all the time. A Builder is just a newtyped function, and we append them with function composition. The concrete append used under the hood is low-level magic.
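A small illustration using Data.ByteString.Builder from the bytestring library (the CSV helper itself is made up for this example):

```haskell
import qualified Data.ByteString.Builder as B
import qualified Data.ByteString.Lazy.Char8 as L

-- Builders append via composition, so even left-nested (<>) chains
-- stay cheap; bytes are only written out by toLazyByteString.
csvLine :: [Int] -> L.ByteString
csvLine = B.toLazyByteString . mconcat . go
  where
    go []       = []
    go [x]      = [B.intDec x]
    go (x : xs) = B.intDec x : B.char8 ',' : go xs
```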

slide-152
SLIDE 152

Lazy Text Builder

Like DList, but with special low-level operations.

Likewise, Lazy Text Builder is just a newtyped function with composition and low-level operations.

slide-153
SLIDE 153

Foldl

Gabriel Gonzalez’s great Foldl library looks a lot like MapK. It is a CPS embedding of left folds, producing fold fusion. This involves building up a fold function using function composition, just like we accumulated mapping functions in MapK. The result: folds make only one pass over the data. Loop fusion!
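A hand-rolled miniature of the same shape (the real API is Control.Foldl; this sketch keeps only the core idea: a fold is a step function, an initial state, and a finisher, and the Applicative instance pairs states so combined folds traverse the input once):

```haskell
{-# LANGUAGE ExistentialQuantification #-}

-- A left fold packaged as data: step, initial accumulator, finisher.
data Fold a b = forall x. Fold (x -> a -> x) x (x -> b)

instance Functor (Fold a) where
  fmap f (Fold step begin done) = Fold step begin (f . done)

-- Pair the states, so combined folds share a single pass.
instance Applicative (Fold a) where
  pure b = Fold const () (const b)
  Fold stepL beginL doneL <*> Fold stepR beginR doneR =
    Fold
      (\(xl, xr) a -> (stepL xl a, stepR xr a))
      (beginL, beginR)
      (\(xl, xr) -> doneL xl (doneR xr))

fold :: Fold a b -> [a] -> b
fold (Fold step begin done) = done . foldl step begin

sumF :: Num a => Fold a a
sumF = Fold (+) 0 id

lengthF :: Fold a Int
lengthF = Fold (\n _ -> n + 1) 0 id

-- One traversal computes both the sum and the length.
mean :: Fold Double Double
mean = (/) <$> sumF <*> (fromIntegral <$> lengthF)
```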

slide-154
SLIDE 154

generic-traverse

The master-class for this technique is Eric Mertens’ generic-traverse. (This is just a teaching exercise, not a library.) He uses a stack of CPS-embedded primitives up to Applicative to fuse things together. When used to derive generic functions, the newtypes go away, and the composition chains are in the perfect shape for GHC’s rewrite rules. We end up with derived functions that are as good as hand-written code.

slide-155
SLIDE 155

Recommended Reading

  • John Hughes, A Novel Representation of Lists
  • Tom Ellis, Demystifying DList
  • Dan Piponi, The Mother of All Monads
  • Ed Kmett, Free Monads for Less
  • Oleg Kiselyov and Atze van der Ploeg, Reflection Without Remorse

  • Getty Ritter, Resources, Laziness and CPS
  • Eric Mertens, generic-traverse