
Clipper: A Low-Latency Online Prediction Serving System



  1. Clipper: A Low-Latency Online Prediction Serving System. Dan Crankshaw (crankshaw@cs.berkeley.edu). http://clipper.ai | https://github.com/ucbrise/clipper. December 8, 2017

  2. Prediction-Serving for interactive applications. [Diagram: Big Training Data → Training → Model → Serving; the Application sends a Query and receives back a Decision.] Timescale: ~10s of milliseconds.

  3. Prediction-Serving Challenges: (1) a large and growing ecosystem of ML models and frameworks (VW, Caffe, ...); (2) supporting low-latency, high-throughput serving workloads.

  4. Prediction-Serving Today. Two current approaches: highly specialized systems built for specific problems, and offline scoring with existing frameworks and systems. A new class of systems is emerging: prediction-serving systems. Clipper aims to unify these approaches.

  5. Clipper Decouples Applications and Models. Applications issue Predict calls to Clipper through an RPC/REST interface, and Clipper evaluates them by issuing RPCs to model containers (MCs), e.g. a Caffe model container.
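
A minimal sketch of what the Predict REST call looks like from the application's side, assuming Clipper's query frontend is listening on its default port (1337) and a hypothetical application named "hello-world" has already been registered:

    import json
    import requests

    # Hypothetical app name; applications are registered via clipper_admin.
    url = "http://localhost:1337/hello-world/predict"

    # The REST frontend accepts a JSON body with an "input" field.
    response = requests.post(
        url,
        headers={"Content-Type": "application/json"},
        data=json.dumps({"input": [0.1, 0.2, 0.3]}))

    # The response carries the prediction and a flag indicating whether
    # the default output was returned (e.g., on a missed latency SLO).
    print(response.json())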

  6. Model containers. [Diagram: Clipper issues RPCs to model containers (MCs), e.g. Caffe.] Common interface → simplifies deployment:
    - Evaluate models using their original code & systems
    - Models run in separate processes as Docker containers
    - Resource isolation: cutting-edge ML frameworks can be buggy
    - Scale-out and deployment on Kubernetes

  7. Clipper Architecture. [Diagram: Applications call Predict on Clipper, which applies caching and latency-aware batching, then issues RPCs to the model containers (MCs), e.g. Caffe.]
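
Latency-aware batching can be pictured as follows: queue queries per model container and flush a batch either when it is full or when waiting any longer would risk the latency objective. This is an illustrative simplification, not Clipper's actual implementation (which adaptively tunes per-container batch sizes):

    import time
    from queue import Queue, Empty

    def collect_batch(queries: Queue, max_batch_size: int, slo_seconds: float):
        """Gather queries into one batch, bounded by size and a latency budget."""
        batch = [queries.get()]               # block until a first query arrives
        deadline = time.time() + slo_seconds  # budget starts at the first query
        while len(batch) < max_batch_size:
            remaining = deadline - time.time()
            if remaining <= 0:
                break                         # flush: waiting risks the SLO
            try:
                batch.append(queries.get(timeout=remaining))
            except Empty:
                break
        return batch                          # sent to the model container as one RPC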

  8. Status of the project (https://github.com/ucbrise/clipper):
    - First released in May 2017 with a focus on usability
    - Currently working towards the 0.3 release and actively working with early users
    - Focused on performance improvements and better monitoring and stability
    - Supports native deployments on Kubernetes and a local Docker mode
    - Goal: a community-owned platform for model deployment and serving
    - Post issues and questions on GitHub and subscribe to our mailing list: clipper-dev@googlegroups.com

  9. Simplifying Model Deployment with Clipper

  10. Getting Started with Clipper is Easy. Docker images are available on Docker Hub, and the Clipper admin tool is distributed as a pip package: pip install clipper_admin. Get up and running without cloning or compiling!
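
A minimal end-to-end sketch using the pip-installed admin tool in local Docker mode; the calls follow the clipper_admin documentation of this era, and exact signatures may differ across releases:

    from clipper_admin import ClipperConnection, DockerContainerManager
    from clipper_admin.deployers import python as python_deployer

    # Start Clipper locally as a set of Docker containers.
    clipper_conn = ClipperConnection(DockerContainerManager())
    clipper_conn.start_clipper()

    # Register an application with a latency objective and a default output
    # to fall back on if the model cannot answer within the SLO.
    clipper_conn.register_application(
        name="hello-world", input_type="doubles",
        default_output="-1.0", slo_micros=100000)

    # Deploy an arbitrary Python closure as a model and link it to the app.
    python_deployer.deploy_python_closure(
        clipper_conn, name="sum-model", version=1, input_type="doubles",
        func=lambda inputs: [str(sum(x)) for x in inputs])
    clipper_conn.link_model_to_app(app_name="hello-world", model_name="sum-model")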

  11. Clipper Connects Training and Serving. [Diagram: a Spark cluster (Driver Program with SparkContext; Worker Nodes running Executors with Tasks and a Cache) alongside a Web Server, a Database, and Clipper with a model container (MC).]
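
To make the training-to-serving handoff concrete, a sketch of the PySpark deployer: it captures the Spark-trained model together with a prediction function that can keep using the SparkContext. Names follow the clipper_admin.deployers.pyspark module and may be version-dependent; spark_model, sc, and clipper_conn are assumed to come from the training job in the diagram and the setup above:

    from clipper_admin.deployers import pyspark as pyspark_deployer

    def predict(spark, model, inputs):
        # Pre-process raw inputs, score with the Spark-trained model,
        # and post-process into the strings Clipper returns to callers.
        return [str(model.predict(x)) for x in inputs]

    pyspark_deployer.deploy_pyspark_model(
        clipper_conn, name="spark-model", version=1, input_type="doubles",
        func=predict, pyspark_model=spark_model, sc=sc)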

  12. Problem: models don't run in isolation. We must extract the model plus its pre- and post-processing logic.

  13. Clipper provides a library of model deployers:
    - The deployer automatically and intelligently saves all prediction code
    - It captures both framework-specific models and arbitrary serializable code
    - It replicates the required subset of the training environment and loads the prediction code in a Clipper model container

  14. Clipper provides a (growing) library of model deployers:
    - Python: combine framework-specific models with external featurization, post-processing, and business logic
      - Currently supports Scikit-Learn, PySpark, and TensorFlow
      - PyTorch, Caffe2, and XGBoost coming soon
    - Scala and Java with Spark: both the MLlib and Pipelines APIs
    - Arbitrary R functions
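
For example, the Python deployer lets featurization and business logic travel with the model, since the closure (and everything it references, here a scikit-learn classifier trained on made-up data) is serialized into the model container; clipper_conn is assumed from the quickstart sketch above:

    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from clipper_admin.deployers import python as python_deployer

    # A stand-in model trained on random data, purely for illustration.
    model = LogisticRegression().fit(
        np.random.rand(100, 4), np.random.randint(2, size=100))

    def predict_with_processing(inputs):
        X = np.array(inputs, dtype=float)
        X /= np.linalg.norm(X, axis=1, keepdims=True) + 1e-8  # pre-processing
        preds = model.predict(X)                              # model evaluation
        return ["accept" if p == 1 else "reject" for p in preds]  # business logic

    python_deployer.deploy_python_closure(
        clipper_conn, name="sklearn-pipeline", version=1,
        input_type="doubles", func=predict_with_processing)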

  15. Ongoing Research

  16. Supporting Modular Multi-Model Pipelines. [Diagram, three patterns: ensembles can improve accuracy; prediction cascades (a fast pre-trained DNN returns its prediction if confident, else falls back to a slow but accurate task-specific model) give faster inference; model specialization and reuse (e.g. routing to a face detector or an object detector depending on what is detected) give faster development.] How can we efficiently support serving arbitrary model pipelines?
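
The prediction cascade on this slide is just control flow around two models: return the fast pre-trained model's answer when it is confident, and only pay for the slow, accurate task-specific model otherwise. A sketch with hypothetical model objects and an illustrative threshold:

    def cascade_predict(x, fast_model, slow_model, threshold=0.9):
        """Two-stage cascade: fast model first, slow model only on fallback."""
        label, confidence = fast_model.predict_with_confidence(x)
        if confidence >= threshold:
            return label              # confident: skip the expensive model
        return slow_model.predict(x)  # slow but accurate task-specific model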

  17. Challenges of Serving Model Pipelines:
    - Complex tradeoff space of latency, throughput, and monetary cost
    - Many serving workloads are interactive and highly latency-sensitive
    - Performance and cost depend on the model, the workload, and the physical resources available
    - Model composition leads to a combinatorial explosion in the size of the tradeoff space
    - Developers must make decisions about how to configure individual models while reasoning about end-to-end pipeline performance

  18. Solution: a Workload-Aware Optimizer
    - Exploit the structure and properties of the inference computation: immutable state, query-level parallelism, compute-intensive
    - Pipeline definition: intermingle arbitrary application code and Clipper-hosted model evaluation for maximum flexibility
    - Optimizer input: the pipeline, a sample workload, and performance or cost constraints
    - Optimizer output: an optimal pipeline configuration that meets the constraints
    - Deployed models use Clipper as the physical execution engine for serving
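
One way to picture the optimizer's search problem: enumerate per-model configurations (e.g., replica counts and batch sizes), estimate end-to-end latency and cost from profiles gathered on the sample workload, and keep the cheapest configuration that meets the constraint. A brute-force illustration (the combinatorial explosion from slide 17 is exactly why the real optimizer must be smarter):

    from itertools import product

    def choose_config(models, profile, latency_slo):
        """Cheapest per-model configuration meeting an end-to-end latency SLO.

        `profile[m][cfg]` maps a config (e.g., (replicas, batch_size)) to a
        (latency_sec, cost_per_hr) estimate from the sample workload.
        """
        best, best_cost = None, float("inf")
        for assignment in product(*(list(profile[m]) for m in models)):
            # Assume a sequential pipeline: latencies and costs both add up.
            latency = sum(profile[m][c][0] for m, c in zip(models, assignment))
            cost = sum(profile[m][c][1] for m, c in zip(models, assignment))
            if latency <= latency_slo and cost < best_cost:
                best, best_cost = dict(zip(models, assignment)), cost
        return best, best_cost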

  19. Conclusion
    - Serving increasingly complex models, trained in a variety of frameworks, while meeting strict performance demands is challenging
    - Clipper adopts a container-based architecture and employs prediction caching and latency-aware batching
    - Clipper's model deployer library makes it easy to deploy both framework-specific models and arbitrary processing code
    - Ongoing work: a workload-aware optimizer to optimize the deployment of complex, multi-model pipelines
    http://clipper.ai
