Mercury: Enabling Remote Procedure Call for High-Performance Computing


  1. Mercury: Enabling Remote Procedure Call for High-Performance Computing
     J. Soumagne, D. Kimpe, J. Zounmevo, M. Chaarawi, Q. Koziol, A. Afsahi, and R. Ross
     The HDF Group, Argonne National Laboratory, Queen's University
     November 26, 2013

  2. RPC and High-Performance Computing
     Remote Procedure Call (RPC) allows local calls to be transparently executed on remote resources. It is already widely used to support distributed services – Google Protocol Buffers, Facebook Thrift, CORBA, Java RMI, etc.
     Typical HPC applications are SPMD: no need for RPC, since control flow is implicit on all nodes and a series of SPMD programs sequentially produce and analyze data.
     Distributed HPC workflows are different: nodes/systems are dedicated to specific tasks, and multiple SPMD applications/jobs execute concurrently and interact.
     The importance of RPC is growing:
     – Compute nodes with minimal/non-standard environments
     – Heterogeneous systems (node-specific resources)
     – More “service-oriented” and more complex applications
     – Workflows and in-situ analysis instead of sequences of SPMD programs

  3. Mercury Objective
     Create a reusable RPC library for use in HPC that can serve as a basis for services such as storage systems, I/O forwarding, analysis frameworks, and other forms of inter-application communication.
     Why not reuse existing RPC frameworks?
     – They do not support efficient large data transfers or asynchronous calls
     – They are mostly built on top of TCP/IP protocols
       ◮ Need support for native transports
       ◮ Need to be easy to port to new machines
     Similar approaches with some differences:
     – I/O Forwarding Scalability Layer (IOFSL)
     – NEtwork Scalable Service Interface (Nessie)
     – Lustre RPC

  4. Overview
     Function arguments / metadata are transferred with the RPC request
     – Two-sided model with unexpected / expected messaging
     – Message size limited to a few kilobytes
     Bulk data (more later) is transferred using a separate and dedicated API
     – One-sided model that exposes RMA semantics
     Network Abstraction Layer
     – Allows definition of multiple network plugins
     – Two functional plugins, MPI (MPI-2) and BMI, though both implement one-sided transfers over two-sided messaging
     – More plugins to come
     [Diagram: client and server RPC proc layers exchange metadata (unexpected + expected messaging) and bulk data (RMA transfers) over the Network Abstraction Layer]
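     The Network Abstraction Layer essentially reduces each transport to a small table of operations. A minimal sketch in C of what such a plugin interface could look like (the struct and function names below are illustrative assumptions, not Mercury's actual NA API):

     /* Hypothetical sketch of a network-abstraction plugin interface;
      * names are illustrative, not Mercury's actual NA API. */
     #include <stddef.h>

     typedef struct na_plugin_ops {
         /* Two-sided messaging, used for RPC metadata */
         int (*msg_send_unexpected)(const void *buf, size_t len, void *dest);
         int (*msg_recv_unexpected)(void *buf, size_t len, void *src);
         int (*msg_send_expected)(const void *buf, size_t len, void *dest);
         int (*msg_recv_expected)(void *buf, size_t len, void *src);
         /* One-sided RMA, used for bulk data */
         int (*put)(const void *local_buf, void *remote_handle, size_t len);
         int (*get)(void *local_buf, void *remote_handle, size_t len);
     } na_plugin_ops_t;

     Each transport (e.g. the MPI or BMI plugin) would fill in one such table, and the RPC layer above calls through it without seeing transport-specific details. A plugin whose network lacks native RMA, like the two current ones, can still provide put/get by emulating them with matched sends and receives.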

  5. Remote Procedure Call
     Internal details (please forget soon!): the mechanism used to send an RPC request.
     [Diagram: client and server each hold a table of registered call ids, id 1 ... id N]
     1. Client and server register the call and get a request id
     2. Client posts an unexpected send with the request id and serialized parameters, and pre-posts a receive for the server response; server posts a receive for the unexpected request
     3. Server executes the call
     4. Server posts a send with the serialized response; both sides test completion of their send / receive requests
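     To make the request-id table concrete, here is a toy, self-contained C sketch (not Mercury code) of how registering a call name yields an id that later selects the right callback on the server. In a real implementation the id could be derived from the name (e.g. by hashing) so that client and server agree without coordinating registration order:

     #include <stdio.h>

     #define MAX_CALLS 64

     typedef int (*rpc_cb_t)(const void *in, void *out);

     static const char *call_names[MAX_CALLS];
     static rpc_cb_t call_cbs[MAX_CALLS];
     static int n_calls;

     /* Register a call name (the server also supplies the callback);
      * the returned index plays the role of the wire request id. */
     static int rpc_register(const char *name, rpc_cb_t cb)
     {
         call_names[n_calls] = name;
         call_cbs[n_calls] = cb;
         return n_calls++;
     }

     /* Example callback standing in for an "open" handler. */
     static int open_cb(const void *in, void *out)
     {
         (void)in; (void)out;
         printf("executing \"open\"\n");
         return 0;
     }

     int main(void)
     {
         int id = rpc_register("open", open_cb);
         /* On receipt of a request, the server uses the id from the
          * message header to dispatch directly into the table. */
         return call_cbs[id](NULL, NULL);
     }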

  6. Remote Procedure Call: Example Code
     Client snippet:

     open_in_t in_struct;
     open_out_t out_struct;

     /* Initialize the interface */
     [...]
     NA_Addr_lookup(network_class, server_name, &server_addr);

     /* Register RPC call */
     rpc_id = HG_REGISTER("open", open_in_t, open_out_t);

     /* Fill input parameters */
     [...]
     in_struct.in_param0 = in_param0;

     /* Send RPC request */
     HG_Forward(server_addr, rpc_id, &in_struct, &out_struct, &rpc_request);

     /* Wait for completion */
     HG_Wait(rpc_request, HG_MAX_IDLE_TIME, HG_STATUS_IGNORE);

     /* Get output parameters */
     [...]
     out_param0 = out_struct.out_param0;

  7. Remote Procedure Call: Example Code
     Server snippet (main loop):

     int main(int argc, char *argv[])
     {
         /* Initialize the interface */
         [...]

         /* Register RPC call */
         HG_HANDLER_REGISTER("open", open_rpc, open_in_t, open_out_t);

         /* Process RPC calls */
         while (!finalized) {
             HG_Handler_process(timeout, HG_STATUS_IGNORE);
         }

         /* Finalize the interface */
         [...]
     }

  8. Remote Procedure Call: Example Code
     Server snippet (RPC callback):

     int open_rpc(hg_handle_t handle)
     {
         open_in_t in_struct;
         open_out_t out_struct;

         /* Get input parameters and bulk handle */
         HG_Handler_get_input(handle, &in_struct);
         [...]
         in_param0 = in_struct.in_param0;

         /* Execute call */
         out_param0 = open(in_param0, ...);

         /* Fill output structure */
         out_struct.out_param0 = out_param0;

         /* Send response back */
         HG_Handler_start_output(handle, &out_struct);

         return HG_SUCCESS;
     }

  9. Bulk Data Transfers
     Definition – bulk data: variable-length data that is (or could be) too large to send eagerly and might need special processing.
     The transfer is controlled by the server (better flow control). Memory buffer(s) are abstracted by handles; handles must be serialized and exchanged using other means (e.g. carried inside the RPC request, as in the callback snippet above).
     1. Client and server each register a local memory segment and get a handle
     2. Client sends its serialized memory handle to the server
     3. Server posts a put/get operation using its local handle and the deserialized remote handle
     4. Server tests completion of the remote put/get
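     Here is a toy, self-contained C illustration of this four-step handshake; real RMA registration and put/get are simulated with plain struct copies and memcpy, and none of the names below are Mercury's actual bulk API:

     #include <stdio.h>
     #include <string.h>

     /* A "memory handle": the registered segment's address and length.
      * A real transport would also carry RMA registration keys. */
     typedef struct {
         void  *base;
         size_t len;
     } mem_handle_t;

     int main(void)
     {
         char client_buf[32] = "bulk payload from the client";
         char server_buf[32] = {0};

         /* 1. Each side registers a local segment and gets a handle. */
         mem_handle_t client_h = { client_buf, sizeof(client_buf) };
         mem_handle_t server_h = { server_buf, sizeof(server_buf) };

         /* 2. Client serializes its handle and ships it with the RPC
          *    request; here the "wire" is just a struct copy. */
         mem_handle_t remote_h = client_h;

         /* 3. Server posts a get from the deserialized remote handle
          *    into its own local segment (memcpy stands in for RMA). */
         memcpy(server_h.base, remote_h.base, server_h.len);

         /* 4. Server tests completion; the stand-in completes
          *    synchronously, so the data has already arrived. */
         printf("server received: %s\n", (char *)server_h.base);
         return 0;
     }

     Because the server drives step 3, it can schedule transfers to match its own buffer availability, which is the flow-control advantage noted above.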
