

  1. Enabling the convergence of HPC and Data Analytics in highly distributed computing infrastructures. Rosa M Badia. Yale: 80 in 2019, Barcelona, 1-2 July 2019

  2. What was I doing when I first met Yale?

  3. Challenges in highly distributed infrastructures
  • Resources that appear and disappear: how to dynamically add/remove nodes to the infrastructure
  • Heterogeneity: different HW characteristics (performance, memory, etc.); different architectures -> compilation issues
  • Network: different types of networks; instability
  • Trust and security
  • Power constraints from the devices
  [Figure: exascale computing in the edge, spanning cloud, fog devices, edge devices, sensors, instruments and actuators, with HPC and AI everywhere]

  4. Data and storage challenge
  • Sensors and instruments as sources of large amounts of heterogeneous data
  • Control of edge devices and remote access to sensor data
  • Edge devices typically have SD cards, much slower than SSDs
  • Compute and store close to the sensors: to avoid data transfers, and for privacy/security reasons
  • New data storage abstractions that enable access from the different devices: object store versus file system?
  • Data reduction / lossy compression
  • Task flow versus data flow: data streaming
  • Metadata and traceability

  5. Orchestration challenges
  • How to describe workflows in such an environment? Which is the right interface?
  • Focus: integration of computational workloads with machine learning and data analytics
  • Intelligent runtime that can make scheduling, allocation, data-transfer, and other decisions
  [Figure: large task dependency graph built at runtime; node labels include createBlockTask, qrTask and transposeBlockTask]

  6. Programming with PyCOMPSs/COMPSs
  • Sequential programming, parallel execution
  • General purpose programming language + annotations/hints to identify tasks and the directionality of their data
  • Task based: the task is the unit of work
  • Builds a task graph at runtime that expresses the potential concurrency: exploitation of parallelism, including parallelism created later on
  • Simple linear address space
  • Agnostic of the computing platform: the runtime takes all scheduling and data transfer decisions

      from pycompss.api.task import task
      from pycompss.api.parameter import INOUT
      from pycompss.api.api import compss_barrier
      import time

      @task(c=INOUT)
      def multiply(a, b, c):
          c += a * b

      initialize_variables()
      startMulTime = time.time()
      for i in range(MSIZE):
          for j in range(MSIZE):
              for k in range(MSIZE):
                  # each call spawns a task; the runtime tracks dependencies
                  multiply(A[i][k], B[k][j], C[i][j])
      compss_barrier()
      mulTime = time.time() - startMulTime
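  The barrier above only synchronizes; to bring a task result back into the main program, PyCOMPSs provides compss_wait_on. A minimal sketch (the block indices are illustrative):

      from pycompss.api.api import compss_wait_on

      # Blocks until all tasks writing C[0][0] have finished and
      # transfers the resulting object back to the master.
      C[0][0] = compss_wait_on(C[0][0])
      print(C[0][0])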

  7. Other decorators: tasks' constraints
  • Constraints enable the definition of HW or SW features required to execute a task
  • The runtime performs the match-making between the task and the computing nodes
  • Support for multi-core tasks and for tasks with memory constraints
  • Support for heterogeneity of the devices in the platform

      @constraint(MemorySize=6.0, ProcessorPerformance="5000")
      @task(c=INOUT)
      def myfunc(a, b, c):
          ...

      @constraint(MemorySize=1.0, ProcessorType="ARM")
      @task(c=INOUT)
      def myfunc_in_the_edge(a, b, c):
          ...

  8. Other decorators: tasks' constraints and versions
  • Constraints enable the definition of HW or SW features required to execute a task; the runtime performs the match-making between the task and the computing nodes
  • Support for multi-core tasks, for tasks with memory constraints, and for heterogeneity of the devices in the platform
  • Versions: a mechanism to support multiple implementations of a given behavior (polymorphism)
  • The runtime selects the most appropriate device in the platform to execute the task: the application always invokes myfunc, and the runtime may transparently run the alternative version instead

      @constraint(MemorySize=6.0, ProcessorPerformance="5000")
      @task(c=INOUT)
      def myfunc(a, b, c):
          ...

      @implement(source_class="myclass", method="myfunc")
      @constraint(MemorySize=1.0, ProcessorType="ARM")
      @task(c=INOUT)
      def myfunc_in_the_edge(a, b, c):
          ...

  9. Other decorators: linking with other programming models
  • A task can be more than a sequential function: a task in PyCOMPSs can be sequential, multi-core or multi-node
  • External binary invocation: the wrapper function is generated automatically
  • Support for alternative programming models: MPI and OmpSs
  • Additional decorators: @binary(binary="app.bin"), @ompss(binary="ompssApp.bin"), @mpi(binary="mpiApp.bin", runner="mpirun", computingNodes=8)
  • Can be combined with the @constraint and @implement decorators

      @constraint(computingUnits="248")
      @mpi(runner="mpirun", computingNodes="16", ...)
      @task(returns=int, stdOutFile=FILE_OUT_STDOUT, ...)
      def nems(stdOutFile, stdErrFile):
          pass
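  As a complement to the MPI example above, a minimal sketch of the @binary case; the file parameters and the function name are illustrative assumptions:

      from pycompss.api.binary import binary
      from pycompss.api.task import task
      from pycompss.api.parameter import FILE_IN, FILE_OUT

      # Hypothetical wrapper: the runtime invokes app.bin as the task body.
      @binary(binary="app.bin")
      @task(input_file=FILE_IN, output_file=FILE_OUT)
      def run_app(input_file, output_file):
          pass  # empty body: the external binary does the work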

  10. Failure management
  • Default behaviour until now: on task failure, retry the execution a number of times; if the failure persists, close the application safely
  • New interface that enables the programmer to give hints about failure management
  • Options: RETRY, CANCEL_SUCCESSORS, FAIL, IGNORE
  • Implications on file management: e.g., on IGNORE, output files are generated empty
  • Offers the possibility of task speculation in the execution of applications
  • Possibility of ignoring part of the execution of the workflow, for example if a task fails on an unstable device

      @task(file_path=FILE_INOUT, on_failure='CANCEL_SUCCESSORS')
      def task(file_path):
          ...
          if cond:
              raise Exception()
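  A hedged sketch of the IGNORE policy mentioned above, e.g. for tasks running on unstable edge devices; the function and parameter names are illustrative:

      from pycompss.api.task import task
      from pycompss.api.parameter import FILE_OUT

      # If this task fails, the runtime continues with the rest of the
      # workflow; per the slide, the output file is then generated empty.
      @task(result=FILE_OUT, on_failure='IGNORE')
      def read_sensor(result):
          ...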

  11. Integration with persistent memory
  • The programmer may decide to make specific objects in the code persistent
  • Persistent objects are managed in the same way as regular objects; tasks can operate on them
  • Objects can be accessed/shared transparently in a distributed computing platform

      a = SampleClass()
      a.make_persistent()
      print(a.func(3, 4))
      a.mytask()
      compss_barrier()
      o = a.another_object
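  A hedged sketch of what SampleClass could look like, assuming a COMPSs storage interface (storage.api) in which persistent classes extend StorageObject; the class body is an illustrative assumption, not the author's code:

      from storage.api import StorageObject  # assumption: COMPSs storage API
      from pycompss.api.task import task

      class SampleClass(StorageObject):
          def __init__(self):
              super().__init__()
              self.another_object = None

          def func(self, x, y):
              return x + y

          @task()
          def mytask(self):
              ...  # runs as a task, potentially where the object resides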

  12. Support for elasticity
  • Possibility to adapt the computing infrastructure to the actual workload
  • Now also for SLURM-managed systems
  • Contributes to a more effective use of resources
  • Very relevant in the edge, where power is a constraint
  [Figure: the COMPSs runtime on the master node asks the SLURM connector for a new node; SLURM creates a new job and updates the original one, expanding the initial SLURM job with additional compute nodes running COMPSs workers and their tasks]

  13. Support for interactivity
  • Jupyter notebooks: an easy-to-use interface for interactivity
  • Where to map every component?
  • Everything local: prototyping and demos; notebook and COMPSs runtime run locally
  • Some tasks executed locally, some remotely: data acquisition in edge devices; remote execution of compute-intensive tasks in large clusters
  • Browser on the laptop, notebook server and COMPSs runtime on a remote server: enables the interactive execution of large computational workflows; an issue with large HPC systems if the login node does not offer remote connections; smoother integration if JupyterHub is available
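  A minimal sketch of PyCOMPSs inside a notebook, using the pycompss.interactive module to start and stop the runtime; the task itself is illustrative:

      import pycompss.interactive as ipycompss
      from pycompss.api.task import task
      from pycompss.api.api import compss_wait_on

      ipycompss.start(graph=True)  # start the COMPSs runtime from the notebook

      @task(returns=int)
      def add(x, y):
          return x + y

      result = compss_wait_on(add(1, 2))
      print(result)  # 3

      ipycompss.stop()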

  14. Integration with Machine Learning
  • Thanks to the Python interface, the integration with ML packages is smooth: TensorFlow, PyTorch, ...
  • Tiramisu: a transfer learning framework (TensorFlow + PyCOMPSs + dataClay)
  • dislib: a collection of machine learning algorithms developed on top of PyCOMPSs
  • Unified interface, inspired by scikit-learn (fit-predict)
  • Unified data acquisition methods, using an independent distributed data representation
  • Parallelism transparent to the user: the PyCOMPSs parallelism is hidden
  • Open source, available to the community: dislib.bsc.es
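  A hedged sketch of dislib's fit-predict style, based on its distributed-array API (ds.random_array, dislib.cluster.KMeans); shapes and parameters are illustrative:

      import dislib as ds
      from dislib.cluster import KMeans

      # A distributed array, partitioned in blocks that PyCOMPSs tasks
      # operate on in parallel, transparently to the user.
      x = ds.random_array((10000, 3), block_size=(1000, 3))

      km = KMeans(n_clusters=4)
      km.fit(x)                  # tasks scheduled by the PyCOMPSs runtime
      labels = km.predict(x)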
