

  1. Distributed TensorFlow Stony Brook University CSE545, Fall 2017

  2. Goals ● Understand TensorFlow as a workflow system. ● Know the key components of TensorFlow. ● Understand the key concepts of distributed TensorFlow. ● Do basic analysis in distributed TensorFlow. Not covered, but easier to pick up afterward: ● How deep learning works ● What a CNN is ● What an RNN (or LSTM, GRU) is

  3. TensorFlow A workflow system tailored to numerical computation. Like Spark, but uses tensors instead of RDDs.

  4. TensorFlow A workflow system tailored to numerical computation. Like Spark, but uses tensors instead of RDDs. A tensor: a multi-dimensional matrix. (image: i.stack.imgur.com)

  5. TensorFlow A workflow system tailored to numerical computation. Like Spark, but uses tensors instead of RDDs. A 2-d tensor is just a matrix; 1-d is a vector; 0-d is a constant/scalar. Note the linguistic ambiguity: the dimensions of a tensor ≠ the dimensions of a matrix. (image: i.stack.imgur.com)

  6. TensorFlow A workflow system tailored to numerical computation. Like Spark, but uses tensors instead of RDDs. Example: image definitions from Assignment 2: image[row][column][rgbx]
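
A minimal sketch of that indexing in TensorFlow 1.x; the tiny 2x2 image and its pixel values below are made up purely for illustration:

    import tensorflow as tf

    # A hypothetical 2x2 image as a 3-d tensor indexed [row][column][rgbx]
    image = tf.constant(
        [[[255, 0, 0, 255], [0, 255, 0, 255]],
         [[0, 0, 255, 255], [255, 255, 255, 255]]],
        dtype=tf.uint8, name="image")

    with tf.Session() as sess:
        print(image.shape)             # (2, 2, 4): rows, columns, channels
        print(sess.run(image[0][1]))   # pixel at row 0, column 1 -> [0 255 0 255]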

  7. TensorFlow A workflow system tailored to numerical computation. Like Spark, but uses tensors instead of RDDs. Technically, less abstract than RDDs, which could hold tensors as well as many other data structures (dictionaries/HashMaps, trees, etc.). Then, what is valuable about TensorFlow?

  8. TensorFlow Efficient, high-level, built-in linear algebra and machine learning operations (i.e. transformations) that enable complex models, like deep learning. Technically, less abstract than RDDs, which could hold tensors as well as many other data structures (dictionaries/HashMaps, trees, etc.). Then, what is valuable about TensorFlow?

  9. TensorFlow Efficient, high-level, built-in linear algebra and machine learning operations that enable complex models, like deep learning. (Bakshi, 2016, "What is Deep Learning? Getting Started With Deep Learning")

  10. TensorFlow Efficient, high-level, built-in linear algebra and machine learning operations. (Abadi, M., Agarwal, A., Barham, P., Brevdo, E., Chen, Z., Citro, C., ... & Ghemawat, S. (2016). TensorFlow: Large-scale machine learning on heterogeneous distributed systems. arXiv preprint arXiv:1603.04467.)

  11. Tensor Flow Operations on tensors are often conceptualized as graphs. (Abadi, M., Agarwal, A., Barham, P., Brevdo, E., Chen, Z., Citro, C., ... & Ghemawat, S. (2016). TensorFlow: Large-scale machine learning on heterogeneous distributed systems. arXiv preprint arXiv:1603.04467.)

  12. Tensor Flow Operations on tensors are often conceptualized as graphs. A simpler example: d = b + c, e = c + 2, a = d ∗ e (Adventures in Machine Learning, Python TensorFlow Tutorial, 2017; Abadi, M., Agarwal, A., Barham, P., Brevdo, E., Chen, Z., Citro, C., ... & Ghemawat, S. (2016). TensorFlow: Large-scale machine learning on heterogeneous distributed systems. arXiv preprint arXiv:1603.04467.)
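
A minimal sketch of that small graph in TensorFlow 1.x; the constant values chosen for b and c are arbitrary, just to make the example runnable:

    import tensorflow as tf

    # Build the graph: nothing is computed yet, only nodes and edges.
    b = tf.constant(3.0, name="b")    # arbitrary value for illustration
    c = tf.constant(1.0, name="c")    # arbitrary value for illustration
    d = tf.add(b, c, name="d")        # d = b + c
    e = tf.add(c, 2.0, name="e")      # e = c + 2
    a = tf.multiply(d, e, name="a")   # a = d * e

    # Execute the graph in a session.
    with tf.Session() as sess:
        print(sess.run(a))            # 12.0 for these values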

  13. Ingredients of a TensorFlow program ● tensors*: variables (persistent, mutable tensors), constants (fixed values), placeholders (fed from data) ● operations: an abstract computation (e.g. matrix multiply, add), executed by device kernels ● graph ● session: defines the environment in which operations run (like a Spark context) ● devices: the specific devices (CPUs or GPUs) on which the session runs (* technically, operations that work with tensors)

  14. Ingredients of a TensorFlow program ● tensors*: ○ variables: tf.Variable(initial_value, name), persistent, mutable tensors ○ constants: tf.constant(value, type, name), fixed values ○ placeholders: tf.placeholder(type, shape, name), fed from data ● operations: an abstract computation (e.g. matrix multiply, add), executed by device kernels ● graph ● session: defines the environment in which operations run (like a Spark context) ● devices: the specific devices (CPUs or GPUs) on which the session runs (* technically, operations that work with tensors)
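
A minimal sketch of the three tensor kinds in TensorFlow 1.x; the shapes and initial values here are assumptions for illustration only:

    import tensorflow as tf

    # variable: a persistent, mutable tensor (e.g. model parameters)
    beta = tf.Variable(tf.zeros([3, 1]), name="beta")

    # constant: a fixed value baked into the graph
    lam = tf.constant(0.1, dtype=tf.float32, name="lam")

    # placeholder: an empty slot filled with data at run time via feed_dict
    X = tf.placeholder(tf.float32, shape=[None, 3], name="X")

    # an operation combining them (applied to tensors, run later in a session)
    y_pred = tf.matmul(X, beta, name="y_pred")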

  15. Operations ● an abstract computation (e.g. matrix multiply, add) ● executed by device kernels ● applied to tensors*: variables (persistent, mutable tensors), constants, placeholders (fed from data)

  16. Sessions ● Places operations on devices ● Stores the values of variables (when not distributed) ● Carries out execution: eval() or run() ● Defines the environment in which operations run (like a Spark context), over specific devices (CPUs or GPUs)
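
A minimal sketch of a session carrying out the execution; the fed values are made up:

    import tensorflow as tf
    import numpy as np

    X = tf.placeholder(tf.float32, shape=[None, 3], name="X")
    beta = tf.Variable(tf.ones([3, 1]), name="beta")
    y_pred = tf.matmul(X, beta)

    with tf.Session() as sess:
        sess.run(tf.global_variables_initializer())   # the session now stores beta's value
        out = sess.run(y_pred, feed_dict={X: np.ones((2, 3), np.float32)})
        print(out)                       # [[3.] [3.]]
        print(beta.eval(session=sess))   # eval() is the per-tensor counterpart of run()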

  17. Ingredients of a TensorFlow program (recap) ● tensors* (variables, constants, placeholders) ● operations ● graph ● session ● devices (* technically, operations that work with tensors)

  18. Demo Ridge Regression (L2-penalized linear regression): minimize $\|y - X\beta\|^2 + \lambda\|\beta\|^2$. Matrix solution: $\hat{\beta} = (X^\top X + \lambda I)^{-1} X^\top y$

  19. Demo Ridge Regression (L2-penalized linear regression): minimize $\|y - X\beta\|^2 + \lambda\|\beta\|^2$. Matrix solution: $\hat{\beta} = (X^\top X + \lambda I)^{-1} X^\top y$. It can also be solved with gradient descent. (Mirrors many parameter optimization problems.)
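
A minimal sketch of the matrix solution in TensorFlow 1.x; the toy data, true coefficients, and penalty value below are assumptions made for the demo:

    import tensorflow as tf
    import numpy as np

    # Toy data: 100 examples, 3 features (made up for illustration).
    X_data = np.random.randn(100, 3).astype(np.float32)
    true_beta = np.array([[1.0], [-2.0], [0.5]])
    y_data = (X_data.dot(true_beta) + 0.1 * np.random.randn(100, 1)).astype(np.float32)

    X = tf.placeholder(tf.float32, shape=[None, 3])
    y = tf.placeholder(tf.float32, shape=[None, 1])
    lam = 0.1  # ridge penalty, arbitrary choice

    # beta_hat = (X^T X + lambda * I)^{-1} X^T y, via a linear solve rather than an explicit inverse
    XtX = tf.matmul(X, X, transpose_a=True)
    Xty = tf.matmul(X, y, transpose_a=True)
    beta_hat = tf.matrix_solve(XtX + lam * tf.eye(3), Xty)

    with tf.Session() as sess:
        print(sess.run(beta_hat, feed_dict={X: X_data, y: y_data}))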

  20. Gradients Ridge Regression (L2-penalized linear regression) can also be solved with gradient descent. (Mirrors many parameter optimization problems.) TensorFlow has the built-in ability to derive gradients given a cost function.

  21. Gradients Ridge Regression (L2-penalized linear regression) can also be solved with gradient descent. (Mirrors many parameter optimization problems.) TensorFlow has the built-in ability to derive gradients given a cost function: tf.gradients(cost, [params])

  22. Gradients TensorFlow has the built-in ability to derive gradients given a cost function: tf.gradients(cost, [params])
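
A minimal sketch of gradient descent on the same ridge objective using tf.gradients; the data, penalty, learning rate, and iteration count are arbitrary choices for the demo:

    import tensorflow as tf
    import numpy as np

    X_data = np.random.randn(100, 3).astype(np.float32)
    y_data = X_data.dot(np.array([[1.0], [-2.0], [0.5]])).astype(np.float32)

    X = tf.placeholder(tf.float32, shape=[None, 3])
    y = tf.placeholder(tf.float32, shape=[None, 1])
    beta = tf.Variable(tf.zeros([3, 1]))
    lam, lr = 0.1, 0.1  # penalty and learning rate, arbitrary choices

    cost = tf.reduce_mean(tf.square(y - tf.matmul(X, beta))) \
           + lam * tf.reduce_sum(tf.square(beta))

    # TensorFlow derives d(cost)/d(beta) symbolically from the graph.
    grad = tf.gradients(cost, [beta])[0]
    train_step = tf.assign(beta, beta - lr * grad)

    with tf.Session() as sess:
        sess.run(tf.global_variables_initializer())
        for _ in range(200):
            sess.run(train_step, feed_dict={X: X_data, y: y_data})
        print(sess.run(beta))   # close to the true coefficients, shrunk toward 0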

  23. Distributed TensorFlow Abadi, M., Barham, P., Chen, J., Chen, Z., Davis, A., Dean, J., ... & Kudlur, M. (2016, November). TensorFlow: A System for Large-Scale Machine Learning. In OSDI (Vol. 16, pp. 265-283).

  24. Distributed TensorFlow Abadi, M., Barham, P., Chen, J., Chen, Z., Davis, A., Dean, J., ... & Kudlur, M. (2016, November). TensorFlow: A System for Large-Scale Machine Learning. In OSDI (Vol. 16, pp. 265-283).

  25. Distributed TensorFlow: Full Pipeline Abadi, M., Barham, P., Chen, J., Chen, Z., Davis, A., Dean, J., ... & Kudlur, M. (2016, November). TensorFlow: A System for Large-Scale Machine Learning. In OSDI (Vol. 16, pp. 265-283).

  26. Local Distribution Multiple devices on a single machine. [Diagram: Program 1 and Program 2 placing operations across devices CPU:0, CPU:1, and GPU:0]

  27. Local Distribution Multiple devices on a single machine: with tf.device("/cpu:1"): beta = tf.Variable(...); with tf.device("/gpu:0"): y_pred = tf.matmul(beta, X) [Devices: CPU:0, CPU:1, GPU:0]
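
A minimal sketch of that device placement in TensorFlow 1.x. The shapes are made up, the matmul argument order is adjusted to match them, and allow_soft_placement lets TF fall back when a listed device (e.g. a second CPU device or a GPU) is not actually present:

    import tensorflow as tf
    import numpy as np

    with tf.device("/cpu:1"):            # pin the variable to the second CPU device
        beta = tf.Variable(tf.ones([3, 1]), name="beta")

    with tf.device("/gpu:0"):            # pin the matmul kernel to the first GPU
        X = tf.placeholder(tf.float32, shape=[None, 3])
        y_pred = tf.matmul(X, beta)

    config = tf.ConfigProto(allow_soft_placement=True, log_device_placement=True)
    with tf.Session(config=config) as sess:
        sess.run(tf.global_variables_initializer())
        print(sess.run(y_pred, feed_dict={X: np.ones((2, 3), np.float32)}))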

  28. Cluster Distribution Multiple devices on multiple machines: with tf.device("/cpu:1"): beta = tf.Variable(...); with tf.device("/gpu:0"): y_pred = tf.matmul(beta, X) [Machine A: CPU:0, CPU:1; Machine B: GPU:0]

  29. Cluster Distribution Multiple devices on multiple machines: with tf.device("/cpu:1"): beta = tf.Variable(...); with tf.device("/gpu:0"): y_pred = tf.matmul(beta, X). How are tensors transferred between machines? [Machine A: CPU:0, CPU:1; Machine B: GPU:0]

  30. Cluster Distribution Jobs and tasks: job "ps" with task 0, and job "worker" with tasks 0 and 1. Each task runs a TF Server (a Master service plus a Worker service) over its machine's devices: Machine A hosts ps/task 0 and worker/task 0 (CPU:0, CPU:1); Machine B hosts worker/task 1 (CPU:0, GPU:0). (Géron, 2017, Hands-On Machine Learning, p. 324)

  31. Cluster Distribution Parameter server ("ps"): its job is just to maintain the values of the variables being optimized. Workers: do all the numerical "work" and send updates to the parameter server. Each task runs a TF Server (Master plus Worker service) over its machine's devices (Machine A: CPU:0, CPU:1; Machine B: CPU:0, GPU:0). (Géron, 2017, Hands-On Machine Learning, p. 324)
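
A minimal sketch of defining that cluster in TensorFlow 1.x. The host names and ports are hypothetical, and each machine would run this script with its own job_name and task_index:

    import tensorflow as tf

    # One "ps" task and two "worker" tasks, mirroring the slide's layout.
    cluster = tf.train.ClusterSpec({
        "ps":     ["machineA.example.com:2221"],
        "worker": ["machineA.example.com:2222", "machineB.example.com:2222"],
    })

    # Each process starts the TF Server for its own task, e.g. worker task 1 on Machine B:
    server = tf.train.Server(cluster, job_name="worker", task_index=1)

    # Variables live on the parameter server; compute ops go to a worker's GPU.
    with tf.device("/job:ps/task:0/cpu:0"):
        beta = tf.Variable(tf.zeros([3, 1]), name="beta")

    with tf.device("/job:worker/task:1/gpu:0"):
        X = tf.placeholder(tf.float32, shape=[None, 3])
        y_pred = tf.matmul(X, beta)

    # A session connected to any server in the cluster can run the graph;
    # TensorFlow moves tensors between machines automatically.
    with tf.Session(server.target) as sess:
        sess.run(tf.global_variables_initializer())
        # sess.run(y_pred, feed_dict={X: ...})   # feed real data here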

  32. Summary ● TF is a workflow system where records are always tensors ○ operations applied to tensors (as either Variables, constants, or placeholders) ● Optimized for numerical / linear algebra ○ automatically finds gradients ○ custom kernels for given devices ● "Easily" distributes ○ within a single machine (local: many devices) ○ across a cluster (many machines and devices) ○ jobs broken up as parameter servers / workers, which makes coordination of data efficient
