

  1. Performing Large Science Experiments on Azure: Pitfalls and Solutions. Wei Lu, Jared Jackson, Jaliya Ekanayake, Roger Barga, Nelson Araujo. Microsoft eXtreme Computing Group. CloudCom2010, Indianapolis, IN.

  2. Windows Azure platform overview (diagram): Application, Storage, Compute, Fabric.

  3. Suggested Application Model
• Use queues for reliable messaging: 1) the Web Role (ASP.NET, WCF, IIS, etc.) receives requests; 2) it puts work in the queue; 3) a Worker Role gets work from the queue; 4) the Worker Role does the work ( Main() { … } ).
• To scale, add more of either role.
• Benefits: decouples the system, absorbs bursts, is resilient to instance failure, and is easy to scale.
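The web-role/worker-role flow above can be sketched as follows. This is an illustrative in-memory stand-in, not the Azure storage-queue API; all function names are hypothetical.

```python
import queue

def web_role_put_work(q, items):
    """Web role: enqueue one message per unit of work."""
    for item in items:
        q.put(item)

def worker_role_loop(q, do_work):
    """Worker role: poll the queue, process, repeat until empty."""
    results = []
    while True:
        try:
            task = q.get_nowait()
        except queue.Empty:
            break  # a real worker would sleep and poll again
        results.append(do_work(task))
    return results

work_queue = queue.Queue()
web_role_put_work(work_queue, [1, 2, 3])
outputs = worker_role_loop(work_queue, lambda n: n * n)
print(outputs)  # [1, 4, 9]
```

Because the queue is the only coupling point, either side can be scaled out independently, which is exactly the "add more of either" property the slide claims.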

  4. Azure Queue
• Communication channel between instances.
• Messages in the queue are reliable and durable, with a 7-day lifetime.
• Fault-tolerance mechanism: a de-queued message becomes visible again after visibilityTimeout if it is not deleted (2-hour maximum limit), so processing must be idempotent.
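The visibilityTimeout mechanism can be illustrated with a toy simulation (this models the behavior the slide describes, not the real Azure API; the SimQueue class is invented for illustration):

```python
class SimQueue:
    """Toy queue where a de-queued message is hidden, not removed."""
    def __init__(self):
        self.messages = {}   # id -> [body, invisible_until]
        self.clock = 0       # simulated time, in minutes
        self._next_id = 0

    def put(self, body):
        self.messages[self._next_id] = [body, 0]
        self._next_id += 1

    def get(self, visibility_timeout):
        for mid, (body, until) in self.messages.items():
            if until <= self.clock:
                # hide the message instead of deleting it
                self.messages[mid][1] = self.clock + visibility_timeout
                return mid, body
        return None

    def delete(self, mid):
        self.messages.pop(mid, None)

q = SimQueue()
q.put("task-42")

mid, body = q.get(visibility_timeout=120)     # worker A dequeues
assert q.get(visibility_timeout=120) is None  # hidden from worker B

q.clock = 121                                 # worker A crashed; timeout elapsed
mid2, body2 = q.get(visibility_timeout=120)   # same message reappears
q.delete(mid2)                                # the worker that finishes deletes it
print(body2)  # task-42
```

The reappearance after a crash is what makes the queue fault-tolerant, and it is also why the slide insists on idempotent processing: the same task body can be delivered more than once.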

  5. AzureBLAST
• BLAST (Basic Local Alignment Search Tool): the most important software in bioinformatics; it identifies the similarity between bio-sequences.
• BLAST is highly computation-intensive: a large number of pairwise alignment operations, and the size of sequence databases has been growing exponentially.
• Two choices for running large BLAST jobs: build a local cluster, or submit jobs to NCBI or EBI (long job-queuing times).
• BLAST is easy to parallelize by query segmentation: a splitting task fans the query set out into many BLAST tasks, and a merging task gathers their results.
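The split/fan-out/merge pattern can be sketched as below. The function names are illustrative, not the AzureBLAST implementation, and the BLAST task is a stand-in for invoking blastall on a partition.

```python
def split_queries(sequences, partition_size):
    """Splitting task: group query sequences into fixed-size partitions."""
    return [sequences[i:i + partition_size]
            for i in range(0, len(sequences), partition_size)]

def blast_task(partition):
    """Stand-in for running BLAST over one partition of queries."""
    return [f"hit:{seq}" for seq in partition]

def merge_results(per_partition_outputs):
    """Merging task: concatenate the outputs of every BLAST task."""
    return [hit for out in per_partition_outputs for hit in out]

queries = [f"seq{i}" for i in range(10)]
partitions = split_queries(queries, partition_size=4)   # 4 + 4 + 2
merged = merge_results(blast_task(p) for p in partitions)
print(len(partitions), len(merged))  # 3 10
```

Query segmentation works because each query's alignment against the database is independent of the others, so the partitions can run on separate worker instances with no coordination until the merge.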

  6. AzureBLAST architecture (diagram): a Web Role hosts the web portal and the job-registration service; a Job Management Role runs the global job scheduler, the dispatch queue, and the job registry (Azure Table); Worker Roles execute the BLAST tasks; Azure Blob storage holds the NCBI databases, BLAST databases, temporary data, etc.; a separate role handles database updating.

  7. All-by-All BLAST experiment
• "All by All" query: compare the database against itself to discover homologs (inter-relationships of known protein sequences).
• Large protein database (4.2 GB): 9,865,668 sequences in total.
• In theory, 100 billion sequence comparisons!
• Performance estimation: it would require 14 CPU-years, making it one of the biggest BLAST jobs as far as we know.

  8. Our Solution
• Allocated 3,776 weighted instances (475 extra-large instances) across three datacenters: US South Central, West Europe, and North Europe.
• Divided the 10 million sequences into several segments; each segment was submitted to one datacenter as one job and consists of smaller partitions.
• The job ultimately took two weeks; the total size of all outputs is ~230 GB.

  9. Understanding Azure by analyzing logs
• A normal log sequence looks like:
    3/31/2010 6:14  RD00155D3611B0  Executing the task 251523...
    3/31/2010 6:25  RD00155D3611B0  Execution of task 251523 is done, it takes 10.9 mins
    3/31/2010 6:25  RD00155D3611B0  Executing the task 251553...
    3/31/2010 6:44  RD00155D3611B0  Execution of task 251553 is done, it takes 19.3 mins
    3/31/2010 6:44  RD00155D3611B0  Executing the task 251600...
    3/31/2010 7:02  RD00155D3611B0  Execution of task 251600 is done, it takes 17.27 mins
• Otherwise, something is wrong (e.g., a lost task):
    3/31/2010 8:22  RD00155D3611B0  Executing the task 251774...
    3/31/2010 9:50  RD00155D3611B0  Executing the task 251895...
    3/31/2010 11:12 RD00155D3611B0  Execution of task 251895 is done, it takes 82 mins
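The lost-task check described above amounts to pairing "Executing" lines with matching "is done" lines; any unmatched start is suspicious. A minimal sketch, using a shortened version of the slide's own log lines:

```python
import re

log = """\
3/31/2010 6:14 RD00155D3611B0 Executing the task 251523...
3/31/2010 6:25 RD00155D3611B0 Execution of task 251523 is done, it takes 10.9 mins
3/31/2010 8:22 RD00155D3611B0 Executing the task 251774...
3/31/2010 9:50 RD00155D3611B0 Executing the task 251895...
3/31/2010 11:12 RD00155D3611B0 Execution of task 251895 is done, it takes 82 mins
"""

started = set(re.findall(r"Executing the task (\d+)", log))
finished = set(re.findall(r"Execution of task (\d+) is done", log))
print(sorted(started - finished))  # ['251774']  <- the lost task
```

Task 251774 was started but never reported done, which matches the "something is wrong" pattern the slide highlights.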

  10. Challenges & Pitfalls
• Failures
• Instance idle time
• Limitations of the current Azure Queue
• Performance/cost estimation
• Minimizing the need for programming

  11. Case Study 1
• North Europe datacenter; 34,265 tasks processed in total.
• Node replacement: avoid using the machine name in your program.
• Almost a one-day delay: try not to orchestrate instances with tight synchronization (e.g., barriers).

  12. Case Study 2
• North Europe datacenter; 34,256 tasks processed in total.
• All 62 nodes lost tasks and then came back in groups: this is the update domain at work, ~6 nodes per group, ~30 mins each.

  13. Case Study 3
• West Europe datacenter; 30,976 tasks completed before the job was killed.
• 35 nodes experienced blob-writing failures at the same time.
• A reasonable guess: the fault domain at work.

  14. Challenges & Pitfalls
• Failures: failures are expected yet unpredictable, so design with failure in mind; most are automatically recovered by the cloud.
• Instance idle time
• Limitations of the current Azure Queue
• Performance/cost estimation
• Minimizing the need for programming

  15. Challenges & Pitfalls
• Failures
• Instance idle time: gap time between two jobs, diversity of workload, load imbalance.
• Limitations of the current Azure Queue
• Performance/cost estimation
• Minimizing the need for programming

  16. Load imbalance
• North Europe datacenter, 2,058 tasks.
• Two days of very low system throughput due to some long-tail tasks.
• Task 56823 needed 8 hours to complete; it was re-executed by 8 nodes because of the 2-hour maximum value of a message's visibilityTimeout.

  17. Challenges & Pitfalls
• Failures
• Instance idle time
• Limitations of the current Azure Queue: the 2-hour maximum visibilityTimeout means each individual task has to finish within 2 hours, and the 7-day maximum message lifetime means the entire experiment has to finish in less than 7 days.
• Performance/cost estimation
• Minimizing the need for programming

  18. Challenges & Pitfalls
• Failures
• Instance idle time
• Limitations of the current Azure Queue
• Performance/cost estimation: the better you understand your application, the more money you can save; BLAST alone has about 20 arguments, and VM size matters too.
• Minimizing the need for programming

  19. Cirrus: Parameter Sweeping Service on Azure (architecture diagram): a Web Role hosts the web portal and the job-registration service; a Job Manager Role runs the scaling engine, job scheduler, parametric engine, and sampling filter; a dispatch queue feeds the Worker Roles; Azure Blob and Azure Table provide storage.

  20. Declarative job definition (Job Manager Role)
• Derived from Nimrod.
• Each job can have: a prolog; commands; parameters; Azure-related operators (AzureCopy, AzureMount, SelectBlobs); and a job configuration.
• Minimizes the programming needed to run legacy binaries on Azure: BLAST, Bayesian-network machine learning, image rendering.
• Example job definition:
    <job name="blast">
      <prolog>
        azurecopy http://.../uniref.fasta uniref.fasta
      </prolog>
      <cmd>
        azurecopy %partition% input
        blastall.exe -p blastp -d uniref.fasta -i input -o output
        azurecopy output %partition%.out
      </cmd>
      <parameter name="partition">
        <selectBlobs>
          <prefix>partitions/</prefix>
        </selectBlobs>
      </parameter>
      <configure>
        <minInstances>2</minInstances>
        <maxInstances>4</maxInstances>
        <shutdownWhenDone>true</shutdownWhenDone>
        <sampling>true</sampling>
      </configure>
    </job>

  21. Dynamic Scaling (Job Manager Role)
• Scaling in/out for an individual job, fitting into the [min, max] window specified in the job config.
• Synchronous scaling: tasks are dispatched only after the scaling is done.
• Asynchronous scaling: task execution and the scaling operation proceed simultaneously.
• Scale in when load imbalance happens, when no new jobs arrive for a period of time, or when the job is configured as "shutdown-when-done" (usually used for the reducing job).

  22. Job Pause-ReConfig-Resume
• Each job maintains a task status table; checkpoint by snapshotting the task table, so a task can be left incomplete. This works around the 7-day/2-hour limitations.
• Handle exceptions optimistically: ignore the exceptions, retry incomplete tasks with a reduced number of instances, and minimize the cost of failures.
• Handles load imbalance as well.
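The task-status-table idea can be sketched as follows. This is a hedged illustration, not the Cirrus implementation: in the real system the table would live in durable storage (e.g., Azure Table), and only the mechanics of snapshot-and-resume are shown.

```python
import copy

# Task status table: every task starts out pending.
task_table = {f"task-{i}": "pending" for i in range(5)}

def mark_done(table, done_ids):
    """Record the tasks that workers have completed so far."""
    for tid in done_ids:
        table[tid] = "done"

def checkpoint(table):
    """Pause: snapshot the task table (stand-in for a durable write)."""
    return copy.deepcopy(table)

def resume(snapshot):
    """Resume: only incomplete tasks go back to the dispatch queue."""
    return [tid for tid, state in snapshot.items() if state != "done"]

mark_done(task_table, ["task-0", "task-3"])
snap = checkpoint(task_table)
todo = resume(snap)
print(sorted(todo))  # ['task-1', 'task-2', 'task-4']
```

Because progress is recorded per task rather than per queue message, a resumed job re-dispatches only unfinished work, which is what decouples the job's total duration from the queue's 7-day/2-hour limits.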

  23. Performance Estimation by Sampling (Job Manager Role)
• Observation-based approach: randomly sample the parameter space with a sampling ratio a, and dispatch only the sampled tasks.
• Scale in to only n' instances to save cost.
• Assuming a uniform distribution, the total cost is estimated by scaling the sampled tasks' measured cost up by 1/a.
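A minimal sketch of this estimator, under the slide's uniform-distribution assumption (the function name and the perfectly uniform synthetic workload are illustrative, not from the paper):

```python
import random

def estimate_total_cost(task_costs, a, seed=0):
    """Run a random fraction a of the tasks and scale the cost by 1/a."""
    rng = random.Random(seed)
    k = max(1, int(len(task_costs) * a))
    sample = rng.sample(task_costs, k)       # the dispatched sample tasks
    return sum(sample) / a                   # uniform assumption: scale up

all_costs = [10.0] * 1000                    # perfectly uniform workload
est = estimate_total_cost(all_costs, a=0.02)
print(est, sum(all_costs))  # 10000.0 10000.0
```

With a genuinely uniform workload the estimate is exact; in practice the accuracy depends on how close the real task-cost distribution is to uniform, which is why the slide states the assumption explicitly.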

  24. Evaluation
• A complete BLAST run takes 2 hours with 16 instances.
• A 2%-sampling run that achieves 96% estimation accuracy takes only about 18 minutes with 2 instances.
• The overall cost of the sampling run is only about 1.8% of the complete run.
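A back-of-the-envelope check of the slide's cost claim, assuming cost is proportional to instance-minutes:

```python
full_run = 16 * 120     # 16 instances for 2 hours -> 1920 instance-minutes
sampling_run = 2 * 18   # 2 instances for 18 mins  ->   36 instance-minutes
ratio = 100 * sampling_run / full_run
print(f"{ratio:.3f}")   # 1.875 (consistent with the slide's ~1.8%)
```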
