
Adaptive Placement for In-memory Storage Functions



  1. Adaptive Placement for In-memory Storage Functions. Ankit Bhardwaj, Chinmay Kulkarni, and Ryan Stutsman. University of Utah, Utah Scalable Computer Systems Lab

  2. Introduction • Kernel-bypass key-value stores offer < 10µs latency and Mops throughput • They are fast because they are simple, but inefficient: data movement, client stalls • Run application logic on the server? The storage server can become the bottleneck, and the effects propagate back to clients • Key ideas: put application logic in decoupled functions; profile invocations and adaptively place them to avoid bottlenecks • Challenge: efficiently shifting compute at microsecond timescales

  3. Disaggregation Improves Utilization and Scaling [diagram: compute and storage tiers decoupled over the network] • Decouple compute and storage using the network • Provision at idle capacity • Scale independently

  4. Disaggregation Improves Utilization and Scaling [diagram: as before; storage-tier examples FaRM and RAMCloud, with <10µs latency and MOPS throughput] • Decouple compute and storage using the network • Provision at idle capacity • Scale independently

  5. But, Data Movement Has a Cost [diagram: heavy data movement between the compute and storage tiers] • Massive data movement destroys efficiency • So, push code to storage?

  6. Storage Function Requirements • Microsecond-scale -> low invocation cost • High-throughput, in-memory -> native code performance • Amenable to multi-core processing • Solution: Splinter allows loadable compiled extensions as storage functions (Splinter: Bare-Metal Extensions for Multi-Tenant Low-Latency Storage)
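To make the requirement concrete, here is a minimal sketch of what a loadable compiled extension could look like, assuming a hypothetical `Db` get/put trait; Splinter's actual extension API differs in detail.

```rust
/// Hypothetical get/put interface the store exposes to extensions;
/// Splinter's real extension API differs in detail.
pub trait Db {
    fn get(&self, table: u64, key: &[u8]) -> Option<Vec<u8>>;
    fn put(&self, table: u64, key: &[u8], value: &[u8]);
}

/// Example extension: follow a chain of keys entirely inside the store.
/// Each get() is a local memory access on the server rather than a
/// network round trip from the client.
pub fn traverse(db: &dyn Db, table: u64, start: &[u8], hops: usize) -> Option<Vec<u8>> {
    let mut key = start.to_vec();
    for _ in 0..hops {
        key = db.get(table, &key)?;
    }
    Some(key)
}
```

Because the extension is compiled native code invoked in-process, each hop costs a memory lookup rather than an RPC, which is where the next slides' throughput gains come from.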

  7. Server-side Placement Can Improve Throughput [figure: client-side get()/put() over the network traverses a hash table on the server, paying one RTT per operation; chart of throughput (millions of tree traversals/sec) vs. traversal depth (operations/invocation), 1 to 8]

  8. Server-side Placement Can Improve Throughput [figure: same chart with a server-side invoke() line about 50% above client-side] • A single invoke() over the network runs the whole traversal at the server, reducing N-1 RPCs and RTTs

  9. Server-side Placement Can Improve Throughput [figure: bar chart, ops/s/core; server-side invoke() reaches about 400,000 vs. about 200,000 for FaRM] • Facebook TAO graph operations perform 2x better than on FaRM, a state-of-the-art system

  10. Server-side Placement Can Bottleneck the Server • Server-side placement is good for data-intensive functions • Compute-intensive functions make the server CPU the bottleneck • An overloaded server stops responding even to get()/put() requests • Overall system throughput drops

  11. Server-side Placement Can Bottleneck the Server [chart: throughput (millions of invocations/s) vs. invocation computation (cycles/invocation), 0 to 10,000, at tree depth 2; server-side starts 22% higher than client-side but ends 55% lower as per-invocation compute grows]

  12. What about Rebalancing and Load-Balancing? • Workload change can happen in two ways: shifts in the function call distribution over time [plot: load vs. time], and shifts in per-invocation costs [plot: frequency vs. invocation computation] • Migrating data works only when the workload is stable • Instead, move load to the client and use the server CPU for migration

  13. Key Insight: Decoupled Functions Can Run Anywhere • Tenants write logically decoupled functions using the standard get/put interface • Clients physically push and run functions server-side • Or the clients can run the functions locally
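A minimal sketch of this insight, assuming hypothetical `LocalDb` and `RemoteDb` types: because the function is written only against a get/put trait, the same compiled code can bind to the server's in-memory table or to a client-side proxy that fetches records over the network.

```rust
use std::collections::HashMap;

/// The interface the function is written against (hypothetical).
pub trait Db {
    fn get(&self, key: &str) -> Option<String>;
}

/// Server-side binding: get() is a local in-memory lookup.
pub struct LocalDb(pub HashMap<String, String>);

impl Db for LocalDb {
    fn get(&self, key: &str) -> Option<String> {
        self.0.get(key).cloned()
    }
}

/// Client-side binding: get() becomes a remote read (stubbed here).
pub struct RemoteDb;

impl Db for RemoteDb {
    fn get(&self, _key: &str) -> Option<String> {
        todo!("issue a get() RPC to the storage server")
    }
}

/// The tenant's function sees only get(), so the identical code runs
/// unchanged against either binding.
pub fn follow_link(db: &dyn Db, key: &str) -> Option<String> {
    let next = db.get(key)?; // first lookup returns another key
    db.get(&next)            // second lookup through it
}
```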

  14. Goal: The Best of Both Worlds [chart: throughput (millions of invocations/s) vs. invocation computation (cycles/invocation); an ideal line tracks server-side in the data-intensive region and client-side in the compute-intensive region]

  15. Adaptive Storage Function Placement (ASFP) [diagram: server-side storage function execution; the client sends one invoke(), and the server interleaves compute, local get()s, and validation]

  16. Adaptive Storage Function Placement (ASFP) [diagram: as before, plus pushed-back execution, where the client runs compute and validation itself and issues remote get()s to the server] • Running heavy compute at the client creates room for the remaining work

  17. Adaptive Storage Function Placement (ASFP) • Mechanisms • Server-side: run storage functions, suspend, push back to client • Client-side: runtime, transparent remote data access • Consistency and concurrency control • Policies • Invocation profiling & cost modeling • Overload detection

  18. Server-side Storage Function Execution [state diagram: an invoke() request creates a Ready task; Schedule moves it to Running; local get()s make it Yield back to Ready; on completion it enters Validation and ends Committed or Aborted, returning the result; on server overload, a Ready task is offloaded via Pushback to the client]
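The state machine above can be summarized as a transition function; a minimal sketch, with illustrative state names that are not ASFP's actual implementation:

```rust
/// Server-side task lifecycle from the diagram; variant names are
/// illustrative, not ASFP's actual implementation.
pub enum TaskState {
    Ready,      // invoke() received and queued
    Running,    // executing on a server core
    Yielded,    // yielded cooperatively at a get()
    Validating, // finished; OCC-validating its read/write set
    Committed,  // validation passed, result returned
    Aborted,    // validation failed
    PushedBack, // offloaded to the client on server overload
}

pub fn next(state: TaskState, overloaded: bool, done: bool, valid: bool) -> TaskState {
    use TaskState::*;
    match state {
        // The overload check happens while the task is still cheap to move.
        Ready if overloaded => PushedBack,
        Ready => Running,
        Running if done => Validating,
        Running => Yielded, // yield at each data access
        Yielded => Ready,   // requeued and rescheduled round-robin
        Validating if valid => Committed,
        Validating => Aborted,
        terminal => terminal, // Committed, Aborted, PushedBack are final
    }
}
```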


  22. Consistency and Concurrency Control • Problem: invoke() tasks run concurrently on each server core, and pushed-back invocations run in parallel with the server tasks • Solution: run invocations as strictly serializable transactions • Use optimistic concurrency control (OCC) • Read/write set tracking is also used for pushback • Pushed-back invocations never generate work for the server • The server doesn't need to maintain any state for pushed-back invocations
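A minimal sketch of the OCC scheme described above, assuming illustrative `Store` and `Invocation` types: each invocation records the version of every record it reads, and at commit the server re-checks those versions before applying the write set. Pushed-back invocations ship their read/write sets back and are validated the same way.

```rust
use std::collections::HashMap;

/// Record versions in the store (illustrative).
pub struct Store {
    versions: HashMap<String, u64>,
}

/// Per-invocation OCC bookkeeping (illustrative).
pub struct Invocation {
    read_set: HashMap<String, u64>,      // key -> version observed at read
    write_set: HashMap<String, Vec<u8>>, // key -> new value to install
}

impl Store {
    /// Commit point for strict serializability: abort if any record the
    /// invocation read has changed since it was read, otherwise bump the
    /// versions of written records (value installation elided).
    pub fn validate_and_commit(&mut self, inv: &Invocation) -> bool {
        for (key, seen) in &inv.read_set {
            if self.versions.get(key).copied().unwrap_or(0) != *seen {
                return false; // conflict: abort, the caller may retry
            }
        }
        for key in inv.write_set.keys() {
            *self.versions.entry(key.clone()).or_insert(0) += 1;
        }
        true
    }
}
```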

  23. Client-side Execution for Pushed-back Invocations [state diagram: a Pushback installs the RW set and creates a Ready task; Schedule moves it to Running; a get() served from the local read set continues, while a remote get() moves it to Awaiting Data; Yield returns it to Ready; on completion it moves to Awaiting Validation, and the server's validation response ends it Committed or Aborted with the result]
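The client-side lifecycle can be captured in the same style as the server-side one; a sketch with illustrative names:

```rust
/// Client-side lifecycle of a pushed-back invocation, mirroring the
/// diagram; names are illustrative.
pub enum ClientTaskState {
    Ready,              // pushback received; shipped RW set installed
    Running,            // re-executing the function on a client core
    AwaitingData,       // blocked on a remote get() for a record that
                        // was not in the local read set
    Completed,          // function finished; result buffered locally
    AwaitingValidation, // RW set sent to the server for OCC validation
    Committed,          // server accepted; result returned to the app
    Aborted,            // server rejected; the invocation may be retried
}
```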


  30. Adaptive Storage Function Placement (ASFP) • Mechanism • Server-side: storage functions, suspend, move back to client • Client-side: runtime, transparent remote data access • Consistency and concurrency control • Policy • Server overload detection • Invocation profiling and classification

  31. Server Overload Detection [flowchart: PollRecvQueue -> PacketToTask -> if #OldTasks > t and #NewTasks > t then Classify&Pushback, else AddTasksToQueue -> ExecuteTasks-RR] • Always run the invocations on the server if it is underloaded • Guarantees: start pushback only when there are some old tasks and the server receives even more tasks; keep at least u tasks even after pushback, to avoid server idleness • Consider only invoke() tasks for overload detection (Shenango: Achieving High CPU Efficiency for Latency-sensitive Datacenter Workloads)
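A sketch of the pushback decision implied by the flowchart, with `t` and `u` as the slide's thresholds and everything else a hypothetical stand-in for the dispatcher's task queue:

```rust
/// A queued piece of work on the server (illustrative).
pub struct Task {
    pub is_invoke: bool,     // invoke() tasks are the only pushback candidates
    pub compute_bound: bool, // set by the profiling/classification step
}

/// Split newly arrived tasks into (kept, pushed_back).
pub fn pushback_candidates(
    old_tasks: &[Task],
    new_tasks: Vec<Task>,
    t: usize, // backlog threshold from the slide
    u: usize, // minimum tasks to retain so cores never idle
) -> (Vec<Task>, Vec<Task>) {
    // Underloaded case: pushback starts only when old tasks have piled
    // up AND even more tasks keep arriving.
    if old_tasks.len() <= t || new_tasks.len() <= t {
        return (new_tasks, Vec::new());
    }
    let mut kept = Vec::new();
    let mut pushed = Vec::new();
    for task in new_tasks {
        let retained = old_tasks.len() + kept.len();
        // Only compute-bound invoke() tasks are pushed back, and only
        // once at least `u` tasks are retained for the server.
        if task.is_invoke && task.compute_bound && retained >= u {
            pushed.push(task);
        } else {
            kept.push(task);
        }
    }
    (kept, pushed)
}
```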

  32. Invocation Profiling and Classification • Profile each invocation for time spent in compute and data access • Classify an invocation as compute-bound if it spent more time in compute than in data access, i.e. it crossed the threshold c > nD • c is the amount of compute done by the invocation • n is the total number of data accesses so far • D is the CPU cost to process one request
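A sketch of the classifier this formula implies; field names are illustrative:

```rust
/// Running profile of one invocation (illustrative field names).
pub struct Profile {
    pub compute_cycles: u64, // c: compute done by the invocation so far
    pub data_accesses: u64,  // n: number of data accesses so far
}

/// `cost_per_request` is D, the CPU cost to process one plain request.
/// Compute-bound means the invocation has burned more CPU than serving
/// its data accesses as get()/put() requests would have: c > n * D.
pub fn is_compute_bound(p: &Profile, cost_per_request: u64) -> bool {
    p.compute_cycles > p.data_accesses * cost_per_request
}
```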
