  1. String-diagram semantics for functional languages with data-flow. Steven Cheung, Dan Ghica & Koko Muroya, University of Birmingham, 9th Sep 2017

  2. GoI machine [Mackie ’95] [Danos & Regnier ’99]
      • Geometry of interaction [Girard ’89]
      • a λ-term → a graph (proof-net)
      • evaluation = a specific path in the fixed graph
      • ✔ call-by-name, ? call-by-value

  3. Dynamic GoI machine [Muroya & Ghica ’17]
      • Framework for defining FPL semantics
      • Combine token passing & graph rewriting
      • different interleaving = different strategy

  4. Dynamic GoI machine [Muroya & Ghica ’17]: (λx. x + x) (1 + 1) [machine-graph diagram]

  5. Dynamic GoI machine [Muroya & Ghica ’17]: (λx. x + x) (1 + 1) [machine-graph diagram]

  6. Dynamic GoI machine [Muroya & Ghica ’17]: (λx. x + x) (1 + 1) → (λx. x + x) 2 [machine-graph diagrams]

  7. Dynamic GoI machine [Muroya & Ghica ’17]: (λx. x + x) (1 + 1) → (λx. x + x) 2 [machine-graph diagram]

  8. Dynamic GoI machine [Muroya & Ghica ’17]: (λx. x + x) (1 + 1) → (λx. x + x) 2 → x + x [x ↦ 2] [machine-graph diagrams]

  9. Dynamic GoI machine [Muroya & Ghica ’17]: (λx. x + x) (1 + 1) → (λx. x + x) 2 → x + x [x ↦ 2] [machine-graph diagram]

  10. Dynamic GoI machine [Muroya & Ghica ’17]: (λx. x + x) (1 + 1) → (λx. x + x) 2 → x + x [x ↦ 2] → x + 2 [x ↦ 2] [machine-graph diagrams]

  11. Dynamic GoI machine [Muroya & Ghica ’17]: (λx. x + x) (1 + 1) → (λx. x + x) 2 → x + x [x ↦ 2] → x + 2 [x ↦ 2] → 2 + 2 [machine-graph diagram]

  12. Dynamic GoI machine [Muroya & Ghica ’17]: (λx. x + x) (1 + 1) → (λx. x + x) 2 → x + x [x ↦ 2] → x + 2 [x ↦ 2] → 2 + 2 → 4 [machine-graph diagram]

  13. Dynamic GoI machine [Muroya & Ghica ’17]: (λx. x + x) (1 + 1) [machine-graph diagram]
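
  The reduction sequence on slides 4-13 can be reproduced with an ordinary environment-based call-by-value interpreter. The sketch below is plain Python, not the graph machine itself, and all names in it (Num, Add, Var, Lam, App, eval_cbv) are illustrative.

      # Minimal call-by-value evaluator for the term from slides 4-13.
      Num = lambda n: ("num", n)
      Add = lambda a, b: ("add", a, b)
      Var = lambda x: ("var", x)
      Lam = lambda x, body: ("lam", x, body)
      App = lambda f, a: ("app", f, a)

      def eval_cbv(term, env):
          tag = term[0]
          if tag == "num":
              return term[1]
          if tag == "var":
              return env[term[1]]
          if tag == "add":
              return eval_cbv(term[1], env) + eval_cbv(term[2], env)
          if tag == "lam":
              return ("closure", term[1], term[2], env)
          if tag == "app":
              clo = eval_cbv(term[1], env)          # evaluate the function
              arg = eval_cbv(term[2], env)          # call-by-value: argument next
              _, x, body, cenv = clo
              return eval_cbv(body, dict(cenv, **{x: arg}))

      # (λx. x + x) (1 + 1) → (λx. x + x) 2 → x + x [x ↦ 2] → 2 + 2 → 4
      term = App(Lam("x", Add(Var("x"), Var("x"))), Add(Num(1), Num(1)))
      print(eval_cbv(term, {}))   # 4

  The DGoIM computes the same result by passing a token over the graph and rewriting it in place, so the shared argument 1 + 1 is evaluated once and then duplicated as the value 2, exactly as slides 6-10 show.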

  14. DGoIM & data-flow model
      • Machine graph ~ data-flow graph
      • GoI-style semantics
          • natural & intuitive
          • makes the complex operational semantics tractable
      • Examples:
          • a PL for machine learning
          • a PL for self-adjusting computation

  15. Machine learning
      • m(x) = W * x + b
      • init: W = 1, b = 0
      • tune the values of W and b to minimise the loss function
      • TensorFlow

  16. TensorFlow

      import tensorflow as tf
      # Model parameters
      W = tf.Variable([1], dtype=tf.float32)
      b = tf.Variable([0], dtype=tf.float32)
      # Model input and output
      x = tf.placeholder(tf.float32)
      linear_model = W * x + b
      y = tf.placeholder(tf.float32)
      # loss
      loss = tf.reduce_sum(tf.square(linear_model - y))  # sum of the squares
      # optimizer
      optimizer = tf.train.GradientDescentOptimizer(0.01)
      train = optimizer.minimize(loss)
      # training data
      x_train = [1, 2, 3, 4]
      y_train = [0, -1, -2, -3]
      # training loop
      init = tf.global_variables_initializer()
      sess = tf.Session()
      sess.run(init)  # reset values to wrong
      for i in range(1000):
          sess.run(train, {x: x_train, y: y_train})
      # evaluate training accuracy
      curr_W, curr_b, curr_loss = sess.run([W, b, loss], {x: x_train, y: y_train})
      print("W: %s b: %s loss: %s" % (curr_W, curr_b, curr_loss))
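
  For comparison, a plain-Python sketch of the gradient-descent loop that the TensorFlow snippet above sets up, with the gradients of the squared-error loss written out by hand. It uses the same data, initialisation (W = 1, b = 0), learning rate and number of steps; it is only an illustration of what the optimizer computes, not part of the talk's DSL.

      x_train = [1, 2, 3, 4]
      y_train = [0, -1, -2, -3]

      W, b = 1.0, 0.0
      rate = 0.01

      for _ in range(1000):
          # gradients of sum((W*x + b - y)^2) with respect to W and b
          dW = sum(2 * (W * x + b - y) * x for x, y in zip(x_train, y_train))
          db = sum(2 * (W * x + b - y) for x, y in zip(x_train, y_train))
          W -= rate * dW
          b -= rate * db

      loss = sum((W * x + b - y) ** 2 for x, y in zip(x_train, y_train))
      print(W, b, loss)   # converges towards W = -1, b = 1, loss ≈ 0 (the data fit y = -x + 1 exactly)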

  17.-19. TensorFlow (same code as slide 16)

  20. DSL for machine learning

      let loss f p = … in
      let grad_desc f p loss rate = … in
      let linear_model x = {1} * x + {0} in
      let f@p = linear_model in
      let linear_model’ = f (grad_desc f p loss rate) in
      …

  21. DSL for machine learning (same code as slide 20)

  22. DSL for machine learning

      let loss f p = … in
      let grad_desc f p loss rate = … in
      let linear_model x = {1} * x + {0} in
      let f@p = linear_model in
      let linear_model’ = f (grad_desc f p loss rate) in
      …

      “Abductive” decoupling:
          let f@p = linear_model in …
          f ≜ λp. λx. p[0] * x + p[1]
          p ≜ [1, 0]

  23. DSL for machine learning (same code and decoupling as slide 22)
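
  The effect of the decoupling can be illustrated roughly in plain Python. Here f, p, tune and linear_model2 are hypothetical names standing in for what `let f@p = linear_model` and `grad_desc` produce on the slides; this is not an implementation of the DSL, which performs the decoupling on the machine graph.

      linear_model = lambda x: 1 * x + 0          # provisional constants {1} and {0} baked in

      f = lambda p: lambda x: p[0] * x + p[1]     # decoupled function of the parameters
      p = [1, 0]                                  # the extracted parameters

      assert f(p)(5) == linear_model(5)           # applying f to p recovers the original model

      def tune(f, p):                             # hypothetical stand-in for grad_desc f p loss rate;
          return [-1.0, 1.0]                      # for the training data above, W = -1, b = 1 is the exact fit

      linear_model2 = f(tune(f, p))               # the tuned model (linear_model' on the slides)
      print(linear_model2(3))                     # -2.0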

  24. Abductive decoupling: let f@p = λx. {1} * x + {0} in [machine-graph diagram]

  25. Abductive decoupling [machine-graph diagrams]

  26. Abductive decoupling [machine-graph diagram]

  27. Abductive decoupling [machine-graph diagram]

  28. Abductive decoupling [machine-graph diagram with parameter node [1,0]]

  29. Abductive decoupling: f ≜ λp. λx. p[0] * x + p[1], p ≜ [1, 0] [machine-graph diagram]

  30. Self-adjusting Computation [Acar ’05]
      • Spreadsheet meets functional programming
      • Adjust the output with minimal re-computation
      let x = 1, y = 2, m = x + 1, n = y + 1 in m + n
      [dependency graph: x = 1, y = 2, m = 2, n = 3, z = 5]

  31. Self-adjusting Computation [Acar ’05]
      • Spreadsheet meets functional programming
      • Adjust the output with minimal re-computation
      let x = 3, y = 2, m = x + 1, n = y + 1 in m + n
      [dependency graph: x = 3, y = 2, m = 4, n = 3, z = 7]

  32. Self-adjusting Computation [Acar ’05]
      • Spreadsheet meets functional programming
      • Adjust the output with minimal re-computation
      • ↓ re-computation = ↑ performance
      • Dynamic dependency graph + memoisation
      let x = 3, y = 2, m = x + 1, n = y + 1 in m + n
      [dependency graph: x = 3, y = 2, m = 4, n = 3, z = 7]
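
  A minimal Python sketch of the change-propagation idea behind slides 30-32, assuming a small hypothetical Cell/Graph abstraction. The names and the set/prop operations are illustrative only; they are not the DGoIM or the DSL, just the dependency-graph behaviour the slides depict.

      class Cell:
          def __init__(self, fn, deps):
              self.fn, self.deps = fn, deps
              self.value = fn(*[d.value for d in deps])
              self.changed = False

          def set(self, v):                  # overwrite an input cell and mark it as changed
              self.value, self.changed = v, True

      class Graph:
          def __init__(self):
              self.cells = []                # kept in creation order, which is topological

          def cell(self, fn, *deps):
              c = Cell(fn, deps)
              self.cells.append(c)
              return c

          def const(self, v):
              return self.cell(lambda: v)

          def prop(self):                    # recompute only cells whose dependencies changed
              for c in self.cells:
                  if c.deps and any(d.changed for d in c.deps):
                      c.value = c.fn(*[d.value for d in c.deps])
                      c.changed = True
              for c in self.cells:
                  c.changed = False

      # the dependency graph from slides 30-32: x = 1, y = 2, m = x + 1, n = y + 1, z = m + n
      g = Graph()
      x, y = g.const(1), g.const(2)
      m = g.cell(lambda v: v + 1, x)
      n = g.cell(lambda v: v + 1, y)
      z = g.cell(lambda a, b: a + b, m, n)
      print(z.value)                         # 5

      x.set(3)
      g.prop()                               # only m and z are recomputed; n keeps its cached value
      print(z.value)                         # 7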

  33. DGoIM & SAC: let x = {1} in let y = x + 2 in (set x to 3); prop; y [machine-graph diagram]

  34. DGoIM & SAC: let x = {1} in let y = x + 2 in (set x to 3); prop; y [machine-graph diagram]

  35. DGoIM & SAC: let x = {1} in let y = x + 2 in (set x to 3); prop; y [machine-graph diagrams]
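
  Under the same hypothetical Cell/Graph sketch from above (again, not the DSL itself), the program on slides 33-35 reads roughly as follows.

      # let x = {1} in let y = x + 2 in (set x to 3); prop; y
      g = Graph()
      x = g.const(1)
      y = g.cell(lambda v: v + 2, x)   # y = x + 2 = 3
      x.set(3)                         # set x to 3
      g.prop()                         # prop: recompute y from the new x
      print(y.value)                   # 5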
