Network and Load-Aware Resource Manager for MPI Programs




1. Network and Load-Aware Resource Manager for MPI Programs. Ashish Kumar, Naman Jain, Preeti Malakar. Indian Institute of Technology, Kanpur. SRMPDS, International Conference on Parallel Processing 2020.


2. Introduction: distributed-memory parallel programs and MPI. More than one processing element, each using its own local memory. Nodes work cooperatively to solve a single large problem, exchanging data by sending and receiving messages. The Message Passing Interface (MPI) is the de facto standard for message passing. Such programs run on a cluster (shared or dedicated) or a supercomputer.

3. Introduction: running MPI jobs requires allocating nodes to them. In this work, we address the problem of allocating a good set of nodes to run MPI jobs in a shared, non-dedicated cluster.

4. Non-dedicated/shared clusters and their challenges. Access to nodes is non-exclusive: the cluster is shared among many users, and the same node can be used by different users and processes at the same time for different purposes. Resource usage therefore varies across time and across nodes. Which nodes should we run our job on, and what parameters should be considered?

5. Node resource usage variation: node resource usage varies across time and across nodes in a shared cluster.

6. Network usage variation: network usage between nodes varies in a shared cluster.

7. Toward our approach: use knowledge of these variations across nodes, time, and the network to allocate resources better, taking into account both static and dynamic attributes of resources, including network availability.

8. Overview: 1. Node Allocation Algorithm; 2. Resource Monitoring; 3. Experiments; 4. Conclusions and Future Work.

9. Allocation as sub-graph selection. Model the cluster as a graph G = (V, E). Vertex v ∈ V: a compute node with compute load CL_v and available processor count pc_v. Edge e ∈ E: network load NL(u, v) between compute nodes. Given n, the number of processes to be allocated, find a sub-graph such that the overall cost/load of the sub-graph is minimized and the process demand is fulfilled.
[Figure: example graph with four nodes v1–v4 and edge weights 60, 75, 80, 85, 90, 90.]
Node  Compute load  #Cores
v1    50.2           6
v2    43.5           8
v3    54.7          10
v4    38.3           4
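The graph model above can be sketched in Python. The node table is taken from the slide; the assignment of the six edge weights to specific node pairs, and the assumption of a complete graph, are illustrative (the figure layout does not survive extraction), as are the dictionary key names.

```python
# Vertices carry compute load CL_v and core count; edges carry NL(u, v).
nodes = {
    "v1": {"compute_load": 50.2, "cores": 6},
    "v2": {"compute_load": 43.5, "cores": 8},
    "v3": {"compute_load": 54.7, "cores": 10},
    "v4": {"compute_load": 38.3, "cores": 4},
}

# Undirected edges keyed by node pair, valued by network load NL(u, v).
# Which weight belongs to which pair is an assumption, not from the paper.
edges = {
    frozenset({"v1", "v2"}): 90,
    frozenset({"v1", "v3"}): 80,
    frozenset({"v1", "v4"}): 60,
    frozenset({"v2", "v3"}): 85,
    frozenset({"v2", "v4"}): 75,
    frozenset({"v3", "v4"}): 90,
}
```

Keying undirected edges by `frozenset` makes lookups order-independent, matching the symmetric NL(u, v).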

10. Some definitions.
Compute load: a measure of the overall load on a node. Static attributes: clock speed, core count, total memory. Dynamic attributes: CPU load, CPU utilization, available memory.
  CL_v = Σ_{a ∈ attributes} w_a · val_{v,a}
Network load: a measure of the load on a point-to-point network link, combining latency and bandwidth.
  NL(u, v) = w_lt · LT(u, v) + w_bw · BW(u, v)
Available processors: a measure of the effective number of processors.
  pc_v = coreCount_v − Load_v% · coreCount_v
Weights can be tuned according to the program's needs and type.
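The three definitions above can be sketched as small functions. The attribute names, the default weights, the interpretation of Load_v% as a CPU-load percentage, and rounding down the busy-core estimate are all illustrative assumptions, not values from the paper.

```python
import math

def compute_load(attr_values, weights):
    """CL_v: weighted sum w_a * val_{v,a} over a node's attributes."""
    return sum(weights[a] * attr_values[a] for a in weights)

def network_load(latency, bandwidth, w_lt=0.5, w_bw=0.5):
    """NL(u, v) = w_lt * LT(u, v) + w_bw * BW(u, v)."""
    return w_lt * latency + w_bw * bandwidth

def available_processors(core_count, load_percent):
    """pc_v: core count minus the cores consumed by the current load."""
    return core_count - math.floor(load_percent / 100 * core_count)
```

A compute-bound program might raise the weight on CPU attributes, while a communication-bound one might raise `w_lt` and `w_bw`, matching the slide's note that weights are tunable per program.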

11. Allocation algorithm. Find a candidate sub-graph corresponding to each node. For each sub-graph G_v = (V_v, E_v) define:
  Compute load: C_{G_v} = Σ_{u ∈ V_v} CL_u
  Network load: N_{G_v} = Σ_{(x,y) ∈ E_v} NL(x, y)
  Total load: T_{G_v} = α · C_{G_v}^{normalized} + β · N_{G_v}^{normalized}
Allocate the best candidate sub-graph on the basis of total load.
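A hedged sketch of this allocation step follows. The slide does not specify how each candidate sub-graph is grown or how the loads are normalized, so growing greedily by increasing compute load and normalizing by cluster-wide totals are assumptions here, as are the shorthand keys `cl` (compute load) and `pc` (available processors).

```python
def candidate_subgraph(seed, nodes, n):
    """Grow a vertex set from `seed`, adding nodes in increasing compute
    load until the available processors cover the demand n."""
    chosen, procs = [seed], nodes[seed]["pc"]
    rest = sorted((v for v in nodes if v != seed),
                  key=lambda v: nodes[v]["cl"])
    for v in rest:
        if procs >= n:
            break
        chosen.append(v)
        procs += nodes[v]["pc"]
    return chosen if procs >= n else None

def total_load(subgraph, nodes, edges, alpha=0.5, beta=0.5):
    """T = alpha * C_normalized + beta * N_normalized for one candidate."""
    c = sum(nodes[v]["cl"] for v in subgraph)
    nl = sum(w for e, w in edges.items() if e <= set(subgraph))
    # Normalize by cluster-wide totals so C and N are comparable
    # (a simple assumption; the paper may normalize differently).
    c_max = sum(v["cl"] for v in nodes.values())
    n_max = sum(edges.values()) or 1
    return alpha * c / c_max + beta * nl / n_max

def allocate(nodes, edges, n):
    """Pick the candidate sub-graph with the minimum total load."""
    return min((g for g in (candidate_subgraph(s, nodes, n) for s in nodes)
                if g is not None),
               key=lambda g: total_load(g, nodes, edges),
               default=None)
```

Seeding one candidate per node keeps the search linear in |V| while still letting the minimum-total-load criterion balance lightly loaded nodes against lightly loaded links.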

