

  1. LA-UR-10-08035. Approved for public release; distribution is unlimited. Title: Data-Intensive Computing on Numerically-Intensive Supercomputers. Author(s): James P. Ahrens (CCS-7), Patricia K. Fasel (CCS-3), Salman Habib (T-2), Katrin Heitmann (ISR-1), Chung-Hsing Hsu (Oak Ridge National Laboratory), Li-Ta Lo (CCS-7), John M. Patchett (CCS-7), Sean J. Williams (CCS-7), Jonathan L. Woodring (CCS-7), Joshua Wu (CCS-7). Intended for: 2010 Supercomputing Conference, November 2010. Los Alamos National Laboratory, an affirmative action/equal opportunity employer, is operated by Los Alamos National Security, LLC for the National Nuclear Security Administration of the U.S. Department of Energy under contract DE-AC52-06NA25396. By acceptance of this article, the publisher recognizes that the U.S. Government retains a nonexclusive, royalty-free license to publish or reproduce the published form of this contribution, or to allow others to do so, for U.S. Government purposes. Los Alamos National Laboratory requests that the publisher identify this article as work performed under the auspices of the U.S. Department of Energy. Los Alamos National Laboratory strongly supports academic freedom and a researcher's right to publish; as an institution, however, the Laboratory does not endorse the viewpoint of a publication or guarantee its technical correctness. Form 836 (7/06)

  2. Data-Intensive Analysis and Visualization on Numerically-Intensive Supercomputers. Abstract: With the advent of the era of petascale supercomputing, marked by the delivery of the Roadrunner supercomputing platform at Los Alamos National Laboratory, there is a pressing need to address the problem of visualizing massive petascale-sized results. In this presentation, I discuss progress on a number of approaches, including in-situ analysis, multi-resolution out-of-core streaming, and interactive rendering on the supercomputing platform. These approaches are placed in context by the emerging area of data-intensive supercomputing. Bio: James Ahrens received his Ph.D. in Computer Science from the University of Washington. His dissertation described a high-performance scientific visualization and experiment management system. After graduation he joined Los Alamos National Laboratory as a staff member in the Advanced Computing Laboratory (ACL), where he is currently the visualization team leader. His research interests include methods for visualizing extremely large scientific datasets, distance visualization, and quantitative/comparative visualization.

  3. James Ahrens Los Alamos National Laboratory Patricia Fasel, Salman Habib, Katrin Heitmann, Chung-Hsing Hsu, Ollie Lo, John Patchett, Sean Williams, Jonathan Woodring, Joshua Wu November 2010

  4. Numerically-intensive / HPC approach:
     - Massive FLOPS
       ▪ Top 500 list: 1999 terascale, 2009 petascale, 2019? exascale
       ▪ Roadrunner: first petaflop supercomputer (Opteron, Cell)
     Data-intensive supercomputing (DISC) approach:
     - Massive data
     - We are exploring it by necessity, for interactive scientific visualization of massive data
     - DISC using a traditional HPC platform

  5. Metric prefixes and scales:
     Prefix:  Mega   Giga   Tera    Peta    Exa
     10^n:    10^6   10^9   10^12   10^15   10^18
     (The slide relates these scales to displays, networks, data sizes, and technology/machines.)

  6. Data-Intensive Super Computing (DISC), as defined by Randal Bryant, CMU:
     1. Data as a first-class citizen
     2. High-level data-oriented programming model
     3. Interactive access: human in the loop
     4. Reliability
     Large database community driver:
     - Success of Google's MapReduce approach
       ▪ Hundreds of processors, terabytes of data, tenth-of-a-second response times
     Scientific driver:
     - Massive data from simulations, experiments, and observations
     DISC highlights the downsides of pursuing a straight massive-FLOPS approach.
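To make the high-level, data-oriented programming model concrete, here is a minimal MapReduce-style word count in plain Python. This is an illustrative sketch only, not Google's implementation; the map_fn/reduce_fn names and the single-process execution are assumptions made for brevity.

    from collections import defaultdict

    def map_fn(document):
        # Emit (word, 1) pairs for each word in one input record.
        for word in document.split():
            yield word.lower(), 1

    def reduce_fn(word, counts):
        # Combine all values emitted for the same key.
        return word, sum(counts)

    def map_reduce(documents):
        # Shuffle phase: group intermediate pairs by key.
        groups = defaultdict(list)
        for doc in documents:
            for key, value in map_fn(doc):
                groups[key].append(value)
        # Reduce phase: one call per key; in a real DISC system these
        # calls run in parallel across hundreds of processors.
        return dict(reduce_fn(k, v) for k, v in groups.items())

    print(map_reduce(["the data the flops", "data first"]))

The programmer states only the per-record and per-key operations; scheduling and data movement belong to the runtime, which is the point of DISC item 2.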

  7. Explore the "middle way" between HPC and DISC, using real-world examples from scientific visualization. Use DISC as a topic guide:
     1. Data as first-class citizen
        - In-situ analysis for the Roadrunner Universe application
     2. High-level data-oriented programming model
        - Programming visualization tools
        - Multi-resolution out-of-core visualization
     3. Interactive access: human in the loop
        - Visualization on the supercomputing platform
     4. Reliability

  8. Numerically-intensive:
     - Data stored in parallel on a numerically-intensive supercomputer's filesystem
     - Brought into the system for computation
     Data-intensive:
     - Computation co-located with storage
     Think hard about a data-focused approach (data first!):
     - What specific scientific questions will this petascale run answer? With what data?
     - What are the algorithms to do this?
     Numerically-intensive / Roadrunner example:
     - Petaflop supercomputer with a few petabytes of disk

  9. RRU: first petascale cosmology simulations
     - New scalable hybrid code designed for heterogeneous architectures
     - New algorithmic ideas for high performance
       ▪ Digital filtering to reduce communication across the Opteron/Cell layer
       ▪ >50 times speed-up over conventional codes
     RRU data challenge:
     - Individual trillion-particle runs generate 100s of TB of raw data
     - Must carry out "on the fly" analysis
     - KD tree-based halo finder, parallelized with particle overloading
       ▪ Domain overloading with particle caches
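As a rough illustration of the spatial search at the heart of a halo finder, the sketch below links nearby particles into groups with a KD tree (a friends-of-friends-style linking pass using scipy). This is an assumption-laden stand-in, not the RRU halo finder: the linking_length value, the union-find grouping, and the serial execution are all simplifications; the real code adds domain decomposition and particle overloading.

    import numpy as np
    from scipy.spatial import cKDTree

    def find_halos(positions, linking_length=0.2):
        # Toy friends-of-friends pass: group particles connected by
        # neighbor links shorter than linking_length.
        tree = cKDTree(positions)
        pairs = tree.query_pairs(linking_length)

        # Union-find to merge linked particles into halos.
        parent = list(range(len(positions)))

        def find(i):
            while parent[i] != i:
                parent[i] = parent[parent[i]]  # path compression
                i = parent[i]
            return i

        for a, b in pairs:
            ra, rb = find(a), find(b)
            if ra != rb:
                parent[ra] = rb

        halos = {}
        for i in range(len(positions)):
            halos.setdefault(find(i), []).append(i)
        return list(halos.values())

    rng = np.random.default_rng(0)
    halos = find_halos(rng.random((1000, 3)))
    print(len(halos), "groups found")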

  10. Data reduction through in-situ feature extraction:
      - Save every hundredth halo catalog
        ▪ Every output timestep, save halo properties and statistics
      - Optimized performance
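A minimal sketch of this in-situ reduction policy, assuming a hypothetical simulation object with step() and find_halos() methods; those names and the 100-step cadence (taken from the slide's "every hundredth") are illustrative, not the RRU API.

    CATALOG_EVERY = 100  # full catalogs are large; keep them rare

    def run_with_insitu_analysis(sim, num_steps):
        # 'sim' is a hypothetical simulation handle, used here only
        # to show where analysis runs relative to the time loop.
        for step in range(num_steps):
            sim.step()
            halos = sim.find_halos()        # in-situ feature extraction
            save_statistics(step, halos)    # small summary, every step
            if step % CATALOG_EVERY == 0:
                save_catalog(step, halos)   # full catalog, every 100th step

    def save_statistics(step, halos):
        # e.g., halo count; tiny compared to the raw particle dump
        with open(f"stats_{step:06d}.txt", "w") as f:
            f.write(f"{len(halos)}\n")

    def save_catalog(step, halos):
        with open(f"catalog_{step:06d}.txt", "w") as f:
            for halo in halos:
                f.write(f"{halo}\n")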

  11. Software stacks compared.
      Numerically-intensive:
      - Application Programs
      - Software Packages
      - Machine-Dependent Programming Model
      - Hardware
      ▪ Programs described at a very low level (MPI)
      ▪ Rely on a small number of software packages
      DISC:
      - Application Programs
      - Machine-Independent Programming Model
      - Runtime System
      - Hardware
      ▪ Application programs written in terms of high-level operations on data
      ▪ Runtime system controls scheduling, load balancing, ...

  12. Visualization architectures are programmable:
      - Use a data-flow program graph
      - Provide their own run-time system
      Optimize access to the numerically-intensive architecture:
      - Multi-resolution out-of-core data visualization

  13. A decade ago: large-scale data, no visualization solutions. Los Alamos/Ahrens led the project to go:
      - From VTK, an open-source object-oriented visualization toolkit (www.vtk.org)
      - To Parallel VTK
      - To ParaView, an open-source, scalable visualization application (www.paraview.org)
      Key concepts:
      - Streaming is the incremental processing of data as pieces
      - Streaming enables parallelism
        ▪ Pieces are processed independently
      - Applied to all operations in the toolkit
        ▪ Contouring, cutting, clipping, analysis
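A minimal sketch of piece-based streaming through VTK's Python bindings: a contour filter is updated one piece at a time, so only a fraction of the dataset is resident at once. The synthetic vtkRTAnalyticSource and the four-piece split are assumptions for the example, and the UpdatePiece call reflects recent VTK releases (older versions set update extents on the pipeline instead).

    import vtk

    # Synthetic volume source standing in for a large dataset on disk.
    source = vtk.vtkRTAnalyticSource()

    contour = vtk.vtkContourFilter()
    contour.SetInputConnection(source.GetOutputPort())
    contour.SetValue(0, 150.0)

    append = vtk.vtkAppendPolyData()
    num_pieces = 4
    for piece in range(num_pieces):
        # Stream: request and process one piece at a time. Pieces are
        # independent, so in Parallel VTK each rank handles its own.
        contour.UpdatePiece(piece, num_pieces, 0)
        chunk = vtk.vtkPolyData()
        chunk.ShallowCopy(contour.GetOutput())
        append.AddInputData(chunk)
    append.Update()
    print(append.GetOutput().GetNumberOfCells(), "cells total")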

  14. Each module in the pipeline can cull and prioritize.
      Culling: remove pieces
      - Based on spatial location
        ▪ Spatial clipping
        ▪ Cutting
        ▪ Probing
      - Based on data value
        ▪ Contouring
        ▪ Thresholding
      Prioritization: order piece processing
      - Based on spatial location
        ▪ View-dependent ordering
        ▪ Frustum culling
        ▪ Occlusion culling
      - Based on features
      - Based on user input
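A simplified sketch of data-value culling plus view-dependent prioritization, assuming each piece carries a bounding-box center and a scalar range; the Piece structure and the camera-distance heuristic are illustrative assumptions, not the ParaView implementation.

    import math
    from dataclasses import dataclass

    @dataclass
    class Piece:
        center: tuple       # bounding-box center (x, y, z)
        data_min: float     # scalar range within the piece
        data_max: float

    def cull_and_prioritize(pieces, contour_value, camera):
        # Culling: a piece whose scalar range excludes the contour
        # value cannot contribute to the isosurface, so drop it.
        kept = [p for p in pieces
                if p.data_min <= contour_value <= p.data_max]
        # Prioritization: process pieces nearest the camera first
        # (a simple stand-in for full view-dependent ordering).
        return sorted(kept, key=lambda p: math.dist(p.center, camera))

    pieces = [Piece((0, 0, 0), 0.0, 1.0), Piece((5, 0, 0), 2.0, 3.0)]
    for p in cull_and_prioritize(pieces, 0.5, camera=(1, 0, 0)):
        print(p.center)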

  15. Data reduction:
      - Subsetting the data and culling
      - Sampling the data from disk to create a multi-resolution representation
      Prioritization:
      - Processing the most important data first
      - Continuously improve visualized results over time
      - Visualization and analysis modules in the pipeline highlight a property of the dataset
        ▪ For example: isosurface, cut plane, clipping
      Think of the progressive refinement approach used for 2D images on the web. Our solution provides a prioritized 3D progressive refinement approach that works within a full-featured visualization tool.

  16. 1) Send and render lowest resolution data

  17. 1) Send and render lowest resolution data
      2) Virtually split into spatial pieces and prioritize pieces
      (Figure: four pieces, numbered 1-4 by priority.)

  18. 1) Send and render lowest resolution data
      2) Virtually split into spatial pieces and prioritize pieces
      3) Send and render highest priority piece at higher resolution

  19. 1) Send and render lowest resolution data
      2) Virtually split into spatial pieces and prioritize pieces
      3) Send and render highest priority piece at higher resolution
      4) Go to step 2 until the data is at the highest resolution

  20. 1) Send and render lowest resolution data
      2) Virtually split into spatial pieces and prioritize pieces
      3) Send and render highest priority piece at higher resolution
      4) Go to step 2 until the data is at the highest resolution
      (Figure: refinement continuing through the remaining pieces.)
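A compact sketch of this prioritized progressive refinement loop, using a max-priority heap over (piece, resolution) work items. The resolution levels, the priority function, and the send_and_render stand-in are assumptions; the loop structure follows steps 1-4 above.

    import heapq

    MAX_LEVEL = 3  # resolution levels 0 (coarsest) .. 3 (full)

    def progressive_render(pieces, priority):
        # Step 1: send and render everything at the lowest resolution.
        for p in pieces:
            send_and_render(p, level=0)
        # Step 2: prioritize pieces (max-heap via negated priority).
        heap = [(-priority(p), p, 0) for p in pieces]
        heapq.heapify(heap)
        # Steps 3-4: repeatedly refine the highest-priority piece
        # until every piece reaches full resolution.
        while heap:
            neg_pri, piece, level = heapq.heappop(heap)
            if level < MAX_LEVEL:
                send_and_render(piece, level + 1)
                heapq.heappush(heap, (neg_pri, piece, level + 1))

    def send_and_render(piece, level):
        # Stand-in for streaming this piece at this resolution and
        # compositing it into the current image.
        print(f"render piece {piece} at level {level}")

    progressive_render(["A", "B", "C"],
                       priority={"A": 3, "B": 1, "C": 2}.get)

The image is usable after the first pass and sharpens where it matters most first, which is what makes the approach interactive on massive data.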

  21. (Figure: side-by-side renderings labeled "Lowest resolution" and "Highest resolution".)

  22. (Figure slide; no text content.)

  23. In-situ and storage-based sampling-based data reduction:
      - Works with all data types (structured, unstructured, particle) and most algorithms with little modification
      - Intelligent sampling designs provide more information in less data
      - Little or no processing with simpler sampling strategies (e.g., pure random)
      - Untransformed data with error bounds
        ▪ Data in the raw eases concerns about unknown transformations/alterations
      - Probabilistic data source as a first-class citizen in visualization and analysis
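One way to read "untransformed data with error bounds": keep a pure random subset of the raw values plus the sampling rate, and report statistics with a standard-error bound. A minimal numpy sketch under those assumptions (the 1% rate and the normal test data are illustrative):

    import numpy as np

    def sample_with_bounds(values, rate=0.01, seed=0):
        # Pure random sampling: surviving values are raw and
        # untransformed, so no unknown alterations are introduced.
        rng = np.random.default_rng(seed)
        keep = rng.random(values.shape[0]) < rate
        return values[keep]

    def estimate_mean(sample):
        # Estimate plus a ~95% bound from the standard error.
        mean = sample.mean()
        stderr = sample.std(ddof=1) / np.sqrt(sample.size)
        return mean, 1.96 * stderr

    data = np.random.default_rng(1).normal(10.0, 2.0, 1_000_000)
    sample = sample_with_bounds(data)
    mean, bound = estimate_mean(sample)
    print(f"mean ~ {mean:.3f} +/- {bound:.3f} (n={sample.size})")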
