  1. Finding fixed points faster Michael Arntzenius University of Birmingham HOPE @ ICFP 2018

  2. Datalog + semi-naïve evaluation ⊆ Datafun + incremental λ-calculus

  3. (1) Datalog + (2) semi-naïve evaluation ⊆ (3) Datafun + (4) incremental λ-calculus

  4. Datalog: decidable logic programming; predicates = finite sets

  5. Transitive closure of edge:
     path(x, z) ← edge(x, z)
     path(x, z) ← edge(x, y) ∧ path(y, z)

  6. Transitive closure of edge, naïvely:
     path_{i+1}(x, z) ← edge(x, z)
     path_{i+1}(x, z) ← edge(x, y) ∧ path_i(y, z)

  7. ...
     i = 3:  path_3(2, 4) ← edge(2, 3) ∧ path_2(3, 4)
     i = 4:  path_4(2, 4) ← edge(2, 3) ∧ path_3(3, 4)
     i = 5:  path_5(2, 4) ← edge(2, 3) ∧ path_4(3, 4)
     ...
     Wastefully re-deducing old facts makes me :(
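
A minimal Haskell sketch of this naïve bottom-up evaluation, using Data.Set (the names Rel, step, and naivePath are mine, not from the talk): every iteration joins edge against the whole path relation, re-deriving all the old facts alongside the new ones.

    import qualified Data.Set as Set
    import           Data.Set (Set)

    type Rel a = Set (a, a)

    -- One immediate-consequence step for the two rules:
    --   path(x, z) <- edge(x, z)
    --   path(x, z) <- edge(x, y), path(y, z)
    step :: Ord a => Rel a -> Rel a -> Rel a
    step edge path =
      edge `Set.union`
      Set.fromList [ (x, z) | (x, y)  <- Set.toList edge
                            , (y', z) <- Set.toList path
                            , y == y' ]

    -- Naïve iteration: re-apply step to the full relation until it stops growing.
    naivePath :: Ord a => Rel a -> Rel a
    naivePath edge = go Set.empty
      where
        go p | p' == p   = p
             | otherwise = go p'
          where p' = step edge p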

  8. Transitive closure of edge, semi-naïvely:
     ∆path_0(x, z) ← edge(x, z)
     ∆path_{i+1}(x, z) ← edge(x, y) ∧ ∆path_i(y, z)
     path_{i+1}(x, y) ← path_i(x, y) ∨ ∆path_i(x, y)
     Computes the changes between naïve iterations!
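
The same computation done semi-naïvely, sketched in Haskell (reusing Rel and the imports from the previous sketch; semiNaivePath is my name): edge is joined only against the delta, i.e. the paths discovered in the previous round, and facts already in path are filtered out.

    -- Semi-naïve iteration: join edge against the *new* paths only.
    semiNaivePath :: Ord a => Rel a -> Rel a
    semiNaivePath edge = go edge edge        -- path_1 = ∆path_0 = edge
      where
        go path delta
          | Set.null new = path
          | otherwise    = go (path `Set.union` new) new
          where
            -- ∆path_{i+1}(x, z) <- edge(x, y), ∆path_i(y, z)
            derived = Set.fromList [ (x, z) | (x, y)  <- Set.toList edge
                                            , (y', z) <- Set.toList delta
                                            , y == y' ]
            new = derived `Set.difference` path   -- keep only genuinely new facts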

  9. ii. datafun

  10. path(x, z) ← edge(x, z)
      path(x, z) ← edge(x, y) ∧ path(y, z)
      path = edge ∪ { (x, z) | (x, y) ∈ edge, (y, z) ∈ path }

  11. Datalog:
      path(x, z) ← edge(x, z)
      path(x, z) ← edge(x, y) ∧ path(y, z)
      Datafun:
      path = edge ∪ { (x, z) | (x, y) ∈ edge, (y, z) ∈ path }

  12. Datafun:
      ◮ simply-typed λ-calculus
      ◮ finite sets & monadic set comprehensions
      ◮ monotone† iterative fixed points
      For more, see Datafun: A functional Datalog [ICFP ’16]!
      † Come to my poster presentation on Monday to learn about types for monotonicity!

  13. path = edge ∪ { (x, z) | (x, y) ∈ edge, (y, z) ∈ path }

  14. step S = edge ∪ { (x, z) | (x, y) ∈ edge, (y, z) ∈ S }
      path = fix step

  15. step S = edge ∪ { (x, z) | (x, y) ∈ edge, (y, z) ∈ S }
      path = fix step
      How do we compute (fix f), naïvely?
      x_0 = ∅
      x_{i+1} = f(x_i)
      Iterate until x_i = x_{i+1}.
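
The naïve scheme as a reusable Haskell function over finite sets (a sketch; naiveFix is my name, and the equality test doubles as the termination check):

    -- Naïve fixed point: start from ∅ and apply f until the result stops changing.
    naiveFix :: Ord a => (Set a -> Set a) -> Set a
    naiveFix f = go Set.empty
      where
        go x | x' == x   = x
             | otherwise = go x'
          where x' = f x

    -- e.g. transitive closure, with step as in the earlier sketch:
    --   path = naiveFix (step edge)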

  16. Incremental λ-Calculus
      “A Theory of Changes for Higher-Order Languages”, PLDI ’14
      Yufei Cai, Paolo Giarrusso, Tillmann Rendel, Klaus Ostermann
      f : A → B
      δf : A → ∆A → ∆B

  17. f : Set A → Set A
      δf : Set A → Set A → Set A

  18. f : Set A → Set A
      δf : Set A → Set A → Set A
      x_0 = ∅                  dx_0 = f ∅
      x_{i+1} = x_i ∪ dx_i     dx_{i+1} = δf x_i dx_i
      Theorem: x_i = f^i ∅
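
A sketch of that iteration in Haskell, assuming the caller supplies both f and its derivative δf (semiNaiveFix and df are my names; stopping when the change adds nothing new is my choice of termination test, since the slide leaves it implicit):

    -- Semi-naïve fixed point:
    --   x_0 = ∅                  dx_0 = f ∅
    --   x_{i+1} = x_i ∪ dx_i     dx_{i+1} = δf x_i dx_i
    semiNaiveFix :: Ord a
                 => (Set a -> Set a)            -- f
                 -> (Set a -> Set a -> Set a)   -- δf : value -> change -> change
                 -> Set a
    semiNaiveFix f df = go Set.empty (f Set.empty)
      where
        go x dx
          | dx `Set.isSubsetOf` x = x                             -- nothing new: done
          | otherwise             = go (x `Set.union` dx) (df x dx)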

  19. iii. details and complications
      Pick your poison!
      1. Precise vs. cheap derivatives
      2. Monotonicity and ordering
      3. Sum types are tricky
      4. Sets of functions are inefficient
      5. Derivatives suck if you don’t optimise them

  20. For every type A:
      ◮ a change type ∆A
      ◮ a zero function 0 : A → ∆A
      ◮ an update function ⊕ : A → ∆A → A
      For every term x : A ⊢ M : B:
      ◮ a derivative x : A, dx : ∆A ⊢ δM : ∆B
      ◮ such that M ⊕ δM = M[(x ⊕ dx)/x]
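
One way to phrase this per-type data in Haskell is as a type class with an associated change type (a sketch of my own, requiring the TypeFamilies extension; the instance for finite sets takes “insert these elements” as the only kind of change, which is what the earlier iteration x_{i+1} = x_i ∪ dx_i uses):

    {-# LANGUAGE TypeFamilies #-}

    import           Data.Set (Set)
    import qualified Data.Set as Set

    -- A change structure: a change type ∆A, a zero change, and an update operator ⊕.
    class ChangeStruct a where
      type Delta a
      zero   :: a -> Delta a          -- 0 : A → ∆A
      update :: a -> Delta a -> a     -- ⊕ : A → ∆A → A

    -- Finite sets, with growth-only changes: x ⊕ dx = x ∪ dx.
    instance Ord x => ChangeStruct (Set x) where
      type Delta (Set x) = Set x
      zero _      = Set.empty
      update s dx = s `Set.union` dx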

  21. 1. Precise vs. cheap derivatives
      δ(M ∪ N) = δM ∪ δN
      vs.
      δ(M ∪ N) = (δM \ N) ∪ (δN \ M)
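
In Haskell terms, with a set change represented as a set of newly added elements (the same representation as in the sketches above; the function names are mine):

    -- Cheap derivative of union: pass both changes through.  May report elements
    -- that the other side already contained.
    dUnionCheap :: Ord a => Set a -> Set a -> Set a -> Set a -> Set a
    dUnionCheap _m dm _n dn = dm `Set.union` dn

    -- Precise derivative of union: (δM \ N) ∪ (δN \ M), i.e. only genuinely new
    -- elements, at the cost of two extra set differences.
    dUnionPrecise :: Ord a => Set a -> Set a -> Set a -> Set a -> Set a
    dUnionPrecise m dm n dn =
      (dm `Set.difference` n) `Set.union` (dn `Set.difference` m)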

  22. 2. Monotonicity and ordering
      A →⁺ B vs. A → B
      ∆(A → B) = A → ∆A → ∆B
      ∆(A →⁺ B) = A →⁺ ∆A → ∆B
      (dx ⊑ dy : ∆A ⇐⇒ (∀a) a ⊕ dx ⊑ a ⊕ dy : A)?
      Increasing changes only? What about incrementalizing Datafun? Why do discrete functions need derivatives if their arguments can’t change?

  23. 3. Sum types are tricky
      ∆(A + B) = ∆A × ∆B?  = ∆A ∪ ∆B?  = ∆A + ∆B?
      δ(case M of in₁ x → N₁; in₂ y → N₂) =
        case (M, δM) of
          (in₁ x, in₁ dx) → δN₁
          (in₂ y, in₂ dy) → δN₂
          (in₁ x, in₂ dy) → ???
          (in₂ x, in₁ dy) → ???

  24. 4. Sets of functions are inefficient
      δ(⋃(x ∈ M) N) = (⋃(x ∈ δM) N) ∪ (⋃(x ∈ M ∪ δM) let dx = 0 x in δN)

  25. 4. Sets of functions are inefficient
      δ(⋃(x ∈ M) N) = (⋃(x ∈ δM) N) ∪ (⋃(x ∈ M ∪ δM) let dx = 0 x in δN)
      What is (0 f) for f : A → B?  It’s the derivative of f.

  26. 5. Derivatives suck if you don’t optimise them
      X ∩ Y = { x | x ∈ X, x ∈ Y }
            = ⋃(x ∈ X) ⋃(y ∈ Y) if x = y then { x } else ∅
      δ(⋃(x ∈ M) N) = (⋃(x ∈ δM) N) ∪ (⋃(x ∈ M ∪ δM) let dx = 0 x in δN)
      δ(X ∩ Y) = horrible!
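
For concreteness, here is the slide’s nested-comprehension definition of intersection transcribed into Haskell (my own transcription): mechanically differentiating it with the ⋃ rule above produces a term that re-traverses roughly X ∪ δX and Y ∪ δY even for a one-element change, which is why unoptimised derivatives can be worse than recomputing from scratch.

    -- Intersection via nested "big unions": for each x in xs and y in ys,
    -- emit {x} when x == y, otherwise the empty set.
    intersectViaUnions :: Ord a => Set a -> Set a -> Set a
    intersectViaUnions xs ys =
      Set.unions [ if x == y then Set.singleton x else Set.empty
                 | x <- Set.toList xs, y <- Set.toList ys ]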

  27. fin
