Part I: Introductory Materials. Introduction to Parallel Computing with R


  1. Part I: Introductory Materials. Introduction to Parallel Computing with R. Dr. Nagiza F. Samatova, Department of Computer Science, North Carolina State University, and Computer Science and Mathematics Division, Oak Ridge National Laboratory

  2. What Analysis Algorithms to Use? The Computer Science & HPC Challenges. Analysis algorithms fail for a few gigabytes of data.
     Algorithmic complexity:
       Calculate means          O(n)
       Calculate FFT            O(n log(n))
       Calculate SVD            O(r • c)
       Clustering algorithms    O(n^2)
     If n = 10GB, then what is O(n) or O(n^2) on a teraflop computer?
     Estimated run time vs. data size n (the chart assumes 10^-12 sec of calculation time per data point, i.e., 1 Tflop/sec; 1GB = 10^9 bytes, 1 Tflop = 10^12 op/sec):
       Data size n    O(n)          O(n log(n))    O(n^2)
       100B           10^-10 sec    10^-10 sec     10^-8 sec
       10KB           10^-8 sec     10^-7 sec      10^-4 sec
       1MB            10^-6 sec     10^-5 sec      1 sec
       100MB          10^-4 sec     10^-3 sec      3 hrs
       10GB           10^-2 sec     0.1 sec        3 yrs
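     To make the scaling concrete, here is a short back-of-the-envelope calculation in R (a sketch, assuming the slide's rate of 10^-12 seconds of work per data point):

       # Run-time estimates at 1 Tflop/sec (10^-12 sec per data point).
       # The data sizes and per-point cost follow the slide's assumptions.
       sec_per_point <- 1e-12
       n <- c(100, 1e4, 1e6, 1e8, 1e10)          # 100B, 10KB, 1MB, 100MB, 10GB

       t_linear    <- n * sec_per_point           # O(n)
       t_nlogn     <- n * log2(n) * sec_per_point # O(n log n)
       t_quadratic <- n^2 * sec_per_point         # O(n^2)

       # For n = 10GB, O(n^2) takes about 10^8 seconds, i.e. roughly 3 years.
       t_quadratic[5] / (3600 * 24 * 365)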

  3. Strategies to Address the Computational Challenge
     • Reduce the amount of data, n, that the algorithm has to work on
     • Develop “better” algorithms in terms of big-O
     • Take advantage of parallel computers with multi-core, multi-GPU, multi-node architectures
     • Parallel algorithm development
     • Environments for parallel computing
     • Optimize the end-to-end data analytics pipeline (I/O, data movement, etc.)

  4. End-to-End Data Analytics (layered architecture)
     • Domain Application Layer: Biology, Climate, Fusion
     • Interface Layer: Web Service, Workflow, Dashboard
     • Middleware Layer (our focus): Automatic Parallelization, Scheduling, Plug-in
     • Analytics Core Library Layer: Streamline, Parallel, Distributed
     • Data Movement, Storage, Access Layer: Parallel Data Mover, Indexing, Lightweight I/O

  5. Introduction to parallel computing with R (figure: a grid of CPUs, http://www.hcs.ufl.edu/~george/sci_torus.gif)
     • What is parallel computing?
     • Why should the user use parallel computing?
     • What are the applications of parallel computing?
     • What techniques can be used to achieve parallelism?
     • What practical issues can arise while using parallel computing?

  6. The world is parallel
     • The universe is inherently parallel.
     • Solar systems, road traffic, ocean patterns, etc. exhibit parallelism.
     (Images: http://cosmicdiary.org/blogs/arif_solmaz/wp-content/uploads/2009/06/solar_system1.jpg, http://upload.wikimedia.org/wikipedia/commons/7/7e/Bangkok-sukhumvit-road-traffic-200503.jpg)

  7. What is parallel computing? Parallel computing refers to the use of many computational resources to solve a problem. (Image: http://www.admin.technion.ac.il/pard/archives/Researchers/ParallelComputing.jpg)

  8. Why should parallel computing be used? (figure: parallelism during construction of a building)
     • Solve bigger problems faster.
     • When serial computing is not viable (e.g., a single CPU cannot handle a large dataset).
     • Improve computational efficiency.
     • Save time and money.

  9. Applications of parallel computing
     • Weather prediction
     • Computer graphics, networking, etc.
     • Image processing
     • Statistical analysis of financial markets
     • Semantic-based search of web pages
     • Protein folding prediction
     • Cryptography
     • Oil exploration
     • Circuit design and microelectronics
     • Nuclear physics
     (Images: http://www.nasm.si.edu/webimages/640/2006-937_640.jpg, http://jeffmohn.files.wordpress.com/2009/04/stock_market_down2.jpg, http://bfi-internal.org/dsnews/v8_no11/processing.jpg)

  10. Division of the problem set: Data parallel
     • The data is broken into a number of subsets.
     • The same instructions are executed simultaneously on different processors, each working on a different data subset.

  11. Division of the problem set: Task parallel
     • The work is broken into a number of independent instruction sets (tasks).
     • Different instructions are executed on the same data simultaneously on different processors. (A small R sketch contrasting the two divisions follows below.)
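     A minimal serial R sketch of the two divisions; the dataset and the chosen statistics are illustrative assumptions, and each lapply() call marks work that could be distributed across processors:

       data <- rnorm(1e6)

       # Data parallel: the same instruction (mean) applied to different subsets of the data.
       chunks <- split(data, rep(1:4, length.out = length(data)))
       chunk_means <- lapply(chunks, mean)

       # Task parallel: different, independent instructions applied to the same data.
       tasks <- list(mean = mean, sd = sd, min = min, max = max)
       task_results <- lapply(tasks, function(f) f(data))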

  12. Embarrassingly Parallel Computing (parallel and independent)
     • Solving many similar problems
     • Tasks are independent
     • Little to no need for coordination between tasks

  13. Niceties of embarrassing parallelism (autonomous processing, minimal inter-process communication)
     • Communication cost is lowered.
     • Highly efficient for large data sets.
     • A little bit of tweaking in the code and you are ready to go!
     • Suitable for the MapReduce programming paradigm.

  14. Task and Data Parallelism in R
     Parallel R aims: (1) to automatically detect and execute task-parallel analyses; (2) to easily plug in data-parallel MPI-based C/C++/Fortran codes; (3) to retain a high level of interactivity, productivity, and abstraction.
     Task & Data Parallelism in pR:
     • Embarrassingly parallel: Likelihood Maximization; Sampling: Bootstrap, Jackknife; Markov Chain Monte Carlo; Animations
     • Data parallel: k-means clustering; Principal Component Analysis; Hierarchical clustering; Distance matrix, histogram

  15. Towards Enabling Parallel Computing in R
     http://cran.cnr.berkeley.edu/web/views/HighPerformanceComputing.html
     • snow (Luke Tierney): general API on top of message-passing routines to provide high-level (parallel apply) commands; mostly demonstrated for embarrassingly parallel applications.
     • Rmpi (Hao Yu): R interface to MPI.
     • rpvm (Na Li and Tony Rossini): R interface to PVM; requires knowledge of parallel programming. For example:
       > library(rpvm)
       > .PVM.start.pvmd()
       > .PVM.addhosts(...)
       > .PVM.config()

  16. Parallel Paradigm Hierarchy
     • Explicit parallelism: Rmpi, rpvm
     • Implicit parallelism:
       • Task-parallel: taskPR
       • Data-parallel:
         • No or limited inter-process communication: pRapply, multicore, snow
         • Intensive inter-process communication: pR, RScaLAPACK
       • Hybrid (task + data parallel): pR

  17. Parallel Paradigm Hierarchy (same hierarchy as the previous slide)

  18. APPLY family of functions in R
     • apply(): applies a function to sections (margins) of an array and returns the result in an array. Structure: apply(array, margin, function, ...)
     • lapply(): applies a function to each element in a list and returns the results in a list. Structure: lapply(list, function, ...)
     (A brief usage sketch follows below.)
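     A brief sketch of both calls; the matrix and list values are illustrative assumptions:

       m <- matrix(1:6, nrow = 2)

       # apply(): sum over margin 1 (rows) and margin 2 (columns) of the array.
       row_sums <- apply(m, 1, sum)   # c(9, 12)
       col_sums <- apply(m, 2, sum)   # c(3, 7, 11)

       # lapply(): apply a function to each element of a list; the result is a list.
       x <- list(a = 1:3, b = 4:6)
       lapply(x, mean)                # list(a = 2, b = 5)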

  19. R’s lapply Method is a Natural Candidate for Automatic Parallelization
     Using R:
       x = c(1:16);
       lapply(x, sqrt)
     (Figure: each element v_1 ... v_n of the input list is passed independently through the function fn, producing results r_1 ... r_n.)
     • Examples: Bootstrapping, Monte Carlo, etc.
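     As a concrete case, a minimal bootstrap sketch (the data and the statistic are illustrative assumptions); each iteration is independent of the others, which is what makes the loop a candidate for parallel execution:

       set.seed(1)
       data <- rnorm(1000)

       # Each bootstrap replicate is independent of the others.
       boot_means <- lapply(1:200, function(i) {
         resample <- sample(data, replace = TRUE)  # draw a resample with replacement
         mean(resample)                            # statistic of interest
       })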

  20. Existing R Packages with Parallel lapply
     • multicore
       – Limited to single-node, multi-core execution
       – mclapply()
     • pRapply
       – Multi-node, multi-core execution
       – Automatically manages all R dependencies
       – pRlapply()
     • snow
       – Built on Rmpi; uses MPI for communication
       – Requires users to explicitly manage R dependencies (libraries, variables, functions)
       – clusterApply()
     (The call shapes are sketched below.)
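     A side-by-side sketch of the three entry points, assuming a list x and a function fn are already defined; the core, process, and node counts are illustrative, and the pRlapply() signature follows the pRapply example later in this deck:

       x  <- as.list(1:16)
       fn <- sqrt

       # multicore: single node, multiple cores.
       library(multicore)
       res1 <- mclapply(x, fn, mc.cores = 2)

       # pRapply: multiple nodes and cores; dependencies shipped automatically.
       library(pRapply)
       res2 <- pRlapply(x, fn, procs = 2, cores = 2)

       # snow: MPI cluster; dependencies must be exported explicitly.
       library(snow)
       cl   <- makeCluster(4, type = "MPI")
       res3 <- clusterApply(cl, x, fn)
       stopCluster(cl)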

  21. Function Input/Output, R Environment
     • How many inputs does fn() have? What are the inputs to the function?
     • What are the outputs? How many outputs?
     • Will fn() know the value of y?
     • What does cbind() do?
     • What is d equal to?
     • How do you return more than one output?
     Using R:
       a=5;
       y = matrix(1:12,3,4);
       fn <- function(x){
         z = y+x;
         b = cbind(y,z);
       }
       d=fn(a);
       d;
     Returning more than one output:
       a=5;
       y = matrix(1:12,3,4);
       fn <- function(x){
         z = y+x;
         b = cbind(y,z);
         return(list(z,b));
       }
       d=fn(a);
       d;
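     A short sketch of the answers for this example (this describes standard R semantics, not anything pR-specific): fn() takes one input, x; y is found through lexical scoping, so fn() does know its value; cbind() binds the matrices column-wise; in the first version d receives the value of the last assignment, b (a 3x8 matrix); and returning a list, as in the second version, is how more than one output comes back:

       a <- 5
       y <- matrix(1:12, 3, 4)

       fn <- function(x) {
         z <- y + x                  # y is visible via lexical scoping
         b <- cbind(y, z)            # bind the two 3x4 matrices column-wise -> 3x8 matrix
         return(list(z = z, b = b))  # return both results in a named list
       }

       d <- fn(a)
       d$z   # the 3x4 matrix y + 5
       d$b   # the 3x8 combined matrix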

  22. pRapply Example
     pRlapply(varList, fn, procs=2, cores=2)
     Using R:
       library(abind);
       x = as.list(1:16);
       y = matrix(1:12,3,4);
       fn <- function(x){
         z = y+x;
         w = abind(x,x);
         b = cbind(y,z);
       }
       lapply(x, fn)
     Using pRapply:
       library(pRapply);
       library(abind);
       x = as.list(1:16);
       y = matrix(1:12,3,4);
       fn <- function(x){
         z = y+x;
         #w = abind(x,x);
         b = cbind(y,z);
       }
       pRlapply(x, fn)
     If I run on multiple machines, how would a non-local host know about the R environment (e.g., y and abind) created before the function call?

  23. snow Example: Explicit Handling of the R Environment
       library(snow);
       library(abind);
       x = as.list(1:16);
       y = matrix(1:12,3,4);
       fn <- function(x){
         z = y+x;
         w = abind(x,x);
         b = cbind(y,z);
       }
       cl = makeCluster(c(numProcs=4), type = "MPI")
       # Explicitly send libraries, functions, and variables before clusterApply()
       clusterExport(cl, "y");
       clusterEvalQ(cl, library(abind));
       clusterApply(cl, x, fn);
       stopCluster(cl);

  24. pR Automatic Parallelization Uses a 2-Tier Execution Strategy
     (Figure: the R end-user calls lapply(list, function); the pR system distributes the list elements over MPI to several R worker processes, and each worker runs on multiple cores C_1 ... C_4, where C_i denotes the i-th core.)

  25. MULTICORE package and mclapply()
     • multicore provides a way to do parallel computing in R.
     • Jobs share the entire initial workspace.
     • It provides methods for result collection.

  26. Multicore’s mclapply(): lapply (serial) → mclapply (parallel)
     • The function mclapply() is the parallelized version of lapply().
     • It takes several arguments in addition to those of lapply(); these arguments are used to set up the parallel environment.
     • By default, the input list is split into as many parts as there are cores.
     • Returns the result in a list. (A usage sketch follows below.)
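     A minimal usage sketch; the input list, the function, and the core count are illustrative assumptions:

       library(multicore)

       x <- as.list(1:16)

       # Serial version.
       serial <- lapply(x, sqrt)

       # Parallel version: the list is split across cores and each piece
       # is processed by a forked R job sharing the initial workspace.
       parallel <- mclapply(x, sqrt, mc.cores = 2)

       identical(serial, parallel)   # TRUE: same results, collected back into a list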
